Test Report: KVM_Linux_crio 19339

                    
8887856610da967907ca11fca489a0af319d423c:2024-07-29:35555

Failed tests (31/320)

Order  Failed test  Duration (s)
43 TestAddons/parallel/Ingress 152.79
45 TestAddons/parallel/MetricsServer 326.95
54 TestAddons/StoppedEnableDisable 154.22
56 TestCertExpiration 1103.22
173 TestMultiControlPlane/serial/StopSecondaryNode 141.72
175 TestMultiControlPlane/serial/RestartSecondaryNode 57.59
177 TestMultiControlPlane/serial/RestartClusterKeepsNodes 375.23
180 TestMultiControlPlane/serial/StopCluster 141.81
240 TestMultiNode/serial/RestartKeepsNodes 335.1
242 TestMultiNode/serial/StopMultiNode 141.41
249 TestPreload 272.4
257 TestKubernetesUpgrade 426.42
292 TestPause/serial/SecondStartNoReconfiguration 55.21
328 TestStartStop/group/old-k8s-version/serial/FirstStart 268.92
346 TestStartStop/group/no-preload/serial/Stop 138.94
349 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.02
350 TestStartStop/group/old-k8s-version/serial/DeployApp 0.47
351 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 113.9
352 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
354 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
358 TestStartStop/group/old-k8s-version/serial/SecondStart 723.97
373 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 542.26
375 TestStartStop/group/embed-certs/serial/Stop 139.03
376 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 542.17
377 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
379 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 541.43
380 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 428.66
381 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 371.07
382 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 151.32
383 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 542.2
384 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 374.61
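
The first failure detailed below is the Ingress connectivity probe: the test curls the NGINX ingress controller from inside the VM with a 'Host: nginx.example.com' header, and the request times out (ssh surfaces curl's exit status 28, curl's operation-timed-out code). A minimal sketch for re-running the same probe by hand, assuming the addons-145541 profile from this run still exists with the ingress addon enabled:

	# Re-run the probe from addons_test.go:264
	out/minikube-linux-amd64 -p addons-145541 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"

	# If it times out again, inspect the controller pods and logs using the
	# same selector the test waits on (app.kubernetes.io/component=controller)
	kubectl --context addons-145541 -n ingress-nginx get pods -l app.kubernetes.io/component=controller -o wide
	kubectl --context addons-145541 -n ingress-nginx logs -l app.kubernetes.io/component=controller --tail=100

The minikube ssh command and the kubectl selector are taken from the log below; the --tail value is illustrative.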
TestAddons/parallel/Ingress (152.79s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-145541 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-145541 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-145541 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [b9cfcd35-b093-46c2-ae44-2c916c5de80b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [b9cfcd35-b093-46c2-ae44-2c916c5de80b] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.004049521s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-145541 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-145541 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m12.521440655s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-145541 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-145541 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.242
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-145541 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-145541 addons disable ingress-dns --alsologtostderr -v=1: (1.827390081s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-145541 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-145541 addons disable ingress --alsologtostderr -v=1: (7.657063083s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-145541 -n addons-145541
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-145541 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-145541 logs -n 25: (1.170727451s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-664821                                                                     | download-only-664821 | jenkins | v1.33.1 | 29 Jul 24 17:33 UTC | 29 Jul 24 17:33 UTC |
	| delete  | -p download-only-330185                                                                     | download-only-330185 | jenkins | v1.33.1 | 29 Jul 24 17:33 UTC | 29 Jul 24 17:33 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-423519 | jenkins | v1.33.1 | 29 Jul 24 17:33 UTC |                     |
	|         | binary-mirror-423519                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:45115                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-423519                                                                     | binary-mirror-423519 | jenkins | v1.33.1 | 29 Jul 24 17:33 UTC | 29 Jul 24 17:33 UTC |
	| addons  | disable dashboard -p                                                                        | addons-145541        | jenkins | v1.33.1 | 29 Jul 24 17:33 UTC |                     |
	|         | addons-145541                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-145541        | jenkins | v1.33.1 | 29 Jul 24 17:33 UTC |                     |
	|         | addons-145541                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-145541 --wait=true                                                                | addons-145541        | jenkins | v1.33.1 | 29 Jul 24 17:33 UTC | 29 Jul 24 17:35 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-145541 addons disable                                                                | addons-145541        | jenkins | v1.33.1 | 29 Jul 24 17:36 UTC | 29 Jul 24 17:36 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-145541        | jenkins | v1.33.1 | 29 Jul 24 17:36 UTC | 29 Jul 24 17:36 UTC |
	|         | addons-145541                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-145541 ssh cat                                                                       | addons-145541        | jenkins | v1.33.1 | 29 Jul 24 17:36 UTC | 29 Jul 24 17:36 UTC |
	|         | /opt/local-path-provisioner/pvc-1e5ae59b-219f-4d33-8e28-ea4906311031_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-145541 addons disable                                                                | addons-145541        | jenkins | v1.33.1 | 29 Jul 24 17:36 UTC | 29 Jul 24 17:36 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-145541 addons disable                                                                | addons-145541        | jenkins | v1.33.1 | 29 Jul 24 17:36 UTC | 29 Jul 24 17:36 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-145541        | jenkins | v1.33.1 | 29 Jul 24 17:36 UTC | 29 Jul 24 17:36 UTC |
	|         | -p addons-145541                                                                            |                      |         |         |                     |                     |
	| ip      | addons-145541 ip                                                                            | addons-145541        | jenkins | v1.33.1 | 29 Jul 24 17:36 UTC | 29 Jul 24 17:36 UTC |
	| addons  | enable headlamp                                                                             | addons-145541        | jenkins | v1.33.1 | 29 Jul 24 17:36 UTC | 29 Jul 24 17:36 UTC |
	|         | -p addons-145541                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-145541 addons disable                                                                | addons-145541        | jenkins | v1.33.1 | 29 Jul 24 17:36 UTC | 29 Jul 24 17:36 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-145541 addons disable                                                                | addons-145541        | jenkins | v1.33.1 | 29 Jul 24 17:36 UTC | 29 Jul 24 17:36 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-145541 addons disable                                                                | addons-145541        | jenkins | v1.33.1 | 29 Jul 24 17:36 UTC | 29 Jul 24 17:36 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-145541        | jenkins | v1.33.1 | 29 Jul 24 17:36 UTC | 29 Jul 24 17:36 UTC |
	|         | addons-145541                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-145541 ssh curl -s                                                                   | addons-145541        | jenkins | v1.33.1 | 29 Jul 24 17:36 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-145541 addons                                                                        | addons-145541        | jenkins | v1.33.1 | 29 Jul 24 17:37 UTC | 29 Jul 24 17:37 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-145541 addons                                                                        | addons-145541        | jenkins | v1.33.1 | 29 Jul 24 17:37 UTC | 29 Jul 24 17:37 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-145541 ip                                                                            | addons-145541        | jenkins | v1.33.1 | 29 Jul 24 17:38 UTC | 29 Jul 24 17:38 UTC |
	| addons  | addons-145541 addons disable                                                                | addons-145541        | jenkins | v1.33.1 | 29 Jul 24 17:38 UTC | 29 Jul 24 17:39 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-145541 addons disable                                                                | addons-145541        | jenkins | v1.33.1 | 29 Jul 24 17:39 UTC | 29 Jul 24 17:39 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 17:33:31
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 17:33:31.254351   96181 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:33:31.254589   96181 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:33:31.254598   96181 out.go:304] Setting ErrFile to fd 2...
	I0729 17:33:31.254602   96181 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:33:31.255151   96181 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19339-88081/.minikube/bin
	I0729 17:33:31.256243   96181 out.go:298] Setting JSON to false
	I0729 17:33:31.257200   96181 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":8131,"bootTime":1722266280,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 17:33:31.257270   96181 start.go:139] virtualization: kvm guest
	I0729 17:33:31.259081   96181 out.go:177] * [addons-145541] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 17:33:31.260749   96181 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 17:33:31.260801   96181 notify.go:220] Checking for updates...
	I0729 17:33:31.263249   96181 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 17:33:31.264558   96181 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19339-88081/kubeconfig
	I0729 17:33:31.265678   96181 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19339-88081/.minikube
	I0729 17:33:31.266900   96181 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 17:33:31.268192   96181 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 17:33:31.270076   96181 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 17:33:31.301942   96181 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 17:33:31.303026   96181 start.go:297] selected driver: kvm2
	I0729 17:33:31.303036   96181 start.go:901] validating driver "kvm2" against <nil>
	I0729 17:33:31.303047   96181 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 17:33:31.303793   96181 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:33:31.303871   96181 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19339-88081/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 17:33:31.318919   96181 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 17:33:31.318973   96181 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 17:33:31.319241   96181 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 17:33:31.319307   96181 cni.go:84] Creating CNI manager for ""
	I0729 17:33:31.319324   96181 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 17:33:31.319339   96181 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 17:33:31.319417   96181 start.go:340] cluster config:
	{Name:addons-145541 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-145541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:33:31.319536   96181 iso.go:125] acquiring lock: {Name:mkff602c753bfa3e0d79d57ee8dc490f2e8d0298 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:33:31.321265   96181 out.go:177] * Starting "addons-145541" primary control-plane node in "addons-145541" cluster
	I0729 17:33:31.322476   96181 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 17:33:31.322514   96181 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 17:33:31.322525   96181 cache.go:56] Caching tarball of preloaded images
	I0729 17:33:31.322603   96181 preload.go:172] Found /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 17:33:31.322614   96181 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 17:33:31.322947   96181 profile.go:143] Saving config to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/config.json ...
	I0729 17:33:31.322975   96181 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/config.json: {Name:mk1a0f78a238bdabf9ef6522c2d736b9c116177c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:33:31.323159   96181 start.go:360] acquireMachinesLock for addons-145541: {Name:mkb4fc91615189a18c2505c715f6575ee0da2912 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:33:31.323220   96181 start.go:364] duration metric: took 43.272µs to acquireMachinesLock for "addons-145541"
	I0729 17:33:31.323249   96181 start.go:93] Provisioning new machine with config: &{Name:addons-145541 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:addons-145541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 17:33:31.323323   96181 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 17:33:31.324830   96181 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0729 17:33:31.324981   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:33:31.325017   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:33:31.339807   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44483
	I0729 17:33:31.340252   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:33:31.340897   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:33:31.340920   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:33:31.341260   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:33:31.341455   96181 main.go:141] libmachine: (addons-145541) Calling .GetMachineName
	I0729 17:33:31.341584   96181 main.go:141] libmachine: (addons-145541) Calling .DriverName
	I0729 17:33:31.341713   96181 start.go:159] libmachine.API.Create for "addons-145541" (driver="kvm2")
	I0729 17:33:31.341740   96181 client.go:168] LocalClient.Create starting
	I0729 17:33:31.341781   96181 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem
	I0729 17:33:31.381294   96181 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem
	I0729 17:33:31.448044   96181 main.go:141] libmachine: Running pre-create checks...
	I0729 17:33:31.448067   96181 main.go:141] libmachine: (addons-145541) Calling .PreCreateCheck
	I0729 17:33:31.448555   96181 main.go:141] libmachine: (addons-145541) Calling .GetConfigRaw
	I0729 17:33:31.448977   96181 main.go:141] libmachine: Creating machine...
	I0729 17:33:31.448989   96181 main.go:141] libmachine: (addons-145541) Calling .Create
	I0729 17:33:31.449151   96181 main.go:141] libmachine: (addons-145541) Creating KVM machine...
	I0729 17:33:31.450356   96181 main.go:141] libmachine: (addons-145541) DBG | found existing default KVM network
	I0729 17:33:31.451054   96181 main.go:141] libmachine: (addons-145541) DBG | I0729 17:33:31.450918   96203 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0729 17:33:31.451089   96181 main.go:141] libmachine: (addons-145541) DBG | created network xml: 
	I0729 17:33:31.451104   96181 main.go:141] libmachine: (addons-145541) DBG | <network>
	I0729 17:33:31.451113   96181 main.go:141] libmachine: (addons-145541) DBG |   <name>mk-addons-145541</name>
	I0729 17:33:31.451120   96181 main.go:141] libmachine: (addons-145541) DBG |   <dns enable='no'/>
	I0729 17:33:31.451132   96181 main.go:141] libmachine: (addons-145541) DBG |   
	I0729 17:33:31.451139   96181 main.go:141] libmachine: (addons-145541) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0729 17:33:31.451144   96181 main.go:141] libmachine: (addons-145541) DBG |     <dhcp>
	I0729 17:33:31.451149   96181 main.go:141] libmachine: (addons-145541) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0729 17:33:31.451155   96181 main.go:141] libmachine: (addons-145541) DBG |     </dhcp>
	I0729 17:33:31.451159   96181 main.go:141] libmachine: (addons-145541) DBG |   </ip>
	I0729 17:33:31.451171   96181 main.go:141] libmachine: (addons-145541) DBG |   
	I0729 17:33:31.451182   96181 main.go:141] libmachine: (addons-145541) DBG | </network>
	I0729 17:33:31.451191   96181 main.go:141] libmachine: (addons-145541) DBG | 
	I0729 17:33:31.456308   96181 main.go:141] libmachine: (addons-145541) DBG | trying to create private KVM network mk-addons-145541 192.168.39.0/24...
	I0729 17:33:31.521417   96181 main.go:141] libmachine: (addons-145541) DBG | private KVM network mk-addons-145541 192.168.39.0/24 created
	I0729 17:33:31.521450   96181 main.go:141] libmachine: (addons-145541) Setting up store path in /home/jenkins/minikube-integration/19339-88081/.minikube/machines/addons-145541 ...
	I0729 17:33:31.521462   96181 main.go:141] libmachine: (addons-145541) DBG | I0729 17:33:31.521384   96203 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19339-88081/.minikube
	I0729 17:33:31.521475   96181 main.go:141] libmachine: (addons-145541) Building disk image from file:///home/jenkins/minikube-integration/19339-88081/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0729 17:33:31.521639   96181 main.go:141] libmachine: (addons-145541) Downloading /home/jenkins/minikube-integration/19339-88081/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19339-88081/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0729 17:33:31.764637   96181 main.go:141] libmachine: (addons-145541) DBG | I0729 17:33:31.764471   96203 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/addons-145541/id_rsa...
	I0729 17:33:31.957899   96181 main.go:141] libmachine: (addons-145541) DBG | I0729 17:33:31.957722   96203 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/addons-145541/addons-145541.rawdisk...
	I0729 17:33:31.957944   96181 main.go:141] libmachine: (addons-145541) DBG | Writing magic tar header
	I0729 17:33:31.957962   96181 main.go:141] libmachine: (addons-145541) Setting executable bit set on /home/jenkins/minikube-integration/19339-88081/.minikube/machines/addons-145541 (perms=drwx------)
	I0729 17:33:31.957978   96181 main.go:141] libmachine: (addons-145541) Setting executable bit set on /home/jenkins/minikube-integration/19339-88081/.minikube/machines (perms=drwxr-xr-x)
	I0729 17:33:31.957985   96181 main.go:141] libmachine: (addons-145541) Setting executable bit set on /home/jenkins/minikube-integration/19339-88081/.minikube (perms=drwxr-xr-x)
	I0729 17:33:31.957999   96181 main.go:141] libmachine: (addons-145541) DBG | Writing SSH key tar header
	I0729 17:33:31.958009   96181 main.go:141] libmachine: (addons-145541) Setting executable bit set on /home/jenkins/minikube-integration/19339-88081 (perms=drwxrwxr-x)
	I0729 17:33:31.958022   96181 main.go:141] libmachine: (addons-145541) DBG | I0729 17:33:31.957838   96203 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19339-88081/.minikube/machines/addons-145541 ...
	I0729 17:33:31.958033   96181 main.go:141] libmachine: (addons-145541) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/addons-145541
	I0729 17:33:31.958043   96181 main.go:141] libmachine: (addons-145541) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 17:33:31.958056   96181 main.go:141] libmachine: (addons-145541) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 17:33:31.958062   96181 main.go:141] libmachine: (addons-145541) Creating domain...
	I0729 17:33:31.958072   96181 main.go:141] libmachine: (addons-145541) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19339-88081/.minikube/machines
	I0729 17:33:31.958077   96181 main.go:141] libmachine: (addons-145541) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19339-88081/.minikube
	I0729 17:33:31.958090   96181 main.go:141] libmachine: (addons-145541) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19339-88081
	I0729 17:33:31.958101   96181 main.go:141] libmachine: (addons-145541) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 17:33:31.958110   96181 main.go:141] libmachine: (addons-145541) DBG | Checking permissions on dir: /home/jenkins
	I0729 17:33:31.958127   96181 main.go:141] libmachine: (addons-145541) DBG | Checking permissions on dir: /home
	I0729 17:33:31.958141   96181 main.go:141] libmachine: (addons-145541) DBG | Skipping /home - not owner
	I0729 17:33:31.959213   96181 main.go:141] libmachine: (addons-145541) define libvirt domain using xml: 
	I0729 17:33:31.959239   96181 main.go:141] libmachine: (addons-145541) <domain type='kvm'>
	I0729 17:33:31.959248   96181 main.go:141] libmachine: (addons-145541)   <name>addons-145541</name>
	I0729 17:33:31.959253   96181 main.go:141] libmachine: (addons-145541)   <memory unit='MiB'>4000</memory>
	I0729 17:33:31.959258   96181 main.go:141] libmachine: (addons-145541)   <vcpu>2</vcpu>
	I0729 17:33:31.959263   96181 main.go:141] libmachine: (addons-145541)   <features>
	I0729 17:33:31.959271   96181 main.go:141] libmachine: (addons-145541)     <acpi/>
	I0729 17:33:31.959278   96181 main.go:141] libmachine: (addons-145541)     <apic/>
	I0729 17:33:31.959286   96181 main.go:141] libmachine: (addons-145541)     <pae/>
	I0729 17:33:31.959296   96181 main.go:141] libmachine: (addons-145541)     
	I0729 17:33:31.959305   96181 main.go:141] libmachine: (addons-145541)   </features>
	I0729 17:33:31.959313   96181 main.go:141] libmachine: (addons-145541)   <cpu mode='host-passthrough'>
	I0729 17:33:31.959321   96181 main.go:141] libmachine: (addons-145541)   
	I0729 17:33:31.959346   96181 main.go:141] libmachine: (addons-145541)   </cpu>
	I0729 17:33:31.959358   96181 main.go:141] libmachine: (addons-145541)   <os>
	I0729 17:33:31.959364   96181 main.go:141] libmachine: (addons-145541)     <type>hvm</type>
	I0729 17:33:31.959373   96181 main.go:141] libmachine: (addons-145541)     <boot dev='cdrom'/>
	I0729 17:33:31.959384   96181 main.go:141] libmachine: (addons-145541)     <boot dev='hd'/>
	I0729 17:33:31.959393   96181 main.go:141] libmachine: (addons-145541)     <bootmenu enable='no'/>
	I0729 17:33:31.959406   96181 main.go:141] libmachine: (addons-145541)   </os>
	I0729 17:33:31.959417   96181 main.go:141] libmachine: (addons-145541)   <devices>
	I0729 17:33:31.959425   96181 main.go:141] libmachine: (addons-145541)     <disk type='file' device='cdrom'>
	I0729 17:33:31.959438   96181 main.go:141] libmachine: (addons-145541)       <source file='/home/jenkins/minikube-integration/19339-88081/.minikube/machines/addons-145541/boot2docker.iso'/>
	I0729 17:33:31.959448   96181 main.go:141] libmachine: (addons-145541)       <target dev='hdc' bus='scsi'/>
	I0729 17:33:31.959457   96181 main.go:141] libmachine: (addons-145541)       <readonly/>
	I0729 17:33:31.959464   96181 main.go:141] libmachine: (addons-145541)     </disk>
	I0729 17:33:31.959473   96181 main.go:141] libmachine: (addons-145541)     <disk type='file' device='disk'>
	I0729 17:33:31.959487   96181 main.go:141] libmachine: (addons-145541)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 17:33:31.959501   96181 main.go:141] libmachine: (addons-145541)       <source file='/home/jenkins/minikube-integration/19339-88081/.minikube/machines/addons-145541/addons-145541.rawdisk'/>
	I0729 17:33:31.959512   96181 main.go:141] libmachine: (addons-145541)       <target dev='hda' bus='virtio'/>
	I0729 17:33:31.959523   96181 main.go:141] libmachine: (addons-145541)     </disk>
	I0729 17:33:31.959533   96181 main.go:141] libmachine: (addons-145541)     <interface type='network'>
	I0729 17:33:31.959546   96181 main.go:141] libmachine: (addons-145541)       <source network='mk-addons-145541'/>
	I0729 17:33:31.959558   96181 main.go:141] libmachine: (addons-145541)       <model type='virtio'/>
	I0729 17:33:31.959591   96181 main.go:141] libmachine: (addons-145541)     </interface>
	I0729 17:33:31.959615   96181 main.go:141] libmachine: (addons-145541)     <interface type='network'>
	I0729 17:33:31.959629   96181 main.go:141] libmachine: (addons-145541)       <source network='default'/>
	I0729 17:33:31.959640   96181 main.go:141] libmachine: (addons-145541)       <model type='virtio'/>
	I0729 17:33:31.959651   96181 main.go:141] libmachine: (addons-145541)     </interface>
	I0729 17:33:31.959666   96181 main.go:141] libmachine: (addons-145541)     <serial type='pty'>
	I0729 17:33:31.959678   96181 main.go:141] libmachine: (addons-145541)       <target port='0'/>
	I0729 17:33:31.959689   96181 main.go:141] libmachine: (addons-145541)     </serial>
	I0729 17:33:31.959701   96181 main.go:141] libmachine: (addons-145541)     <console type='pty'>
	I0729 17:33:31.959711   96181 main.go:141] libmachine: (addons-145541)       <target type='serial' port='0'/>
	I0729 17:33:31.959722   96181 main.go:141] libmachine: (addons-145541)     </console>
	I0729 17:33:31.959733   96181 main.go:141] libmachine: (addons-145541)     <rng model='virtio'>
	I0729 17:33:31.959746   96181 main.go:141] libmachine: (addons-145541)       <backend model='random'>/dev/random</backend>
	I0729 17:33:31.959753   96181 main.go:141] libmachine: (addons-145541)     </rng>
	I0729 17:33:31.959760   96181 main.go:141] libmachine: (addons-145541)     
	I0729 17:33:31.959770   96181 main.go:141] libmachine: (addons-145541)     
	I0729 17:33:31.959782   96181 main.go:141] libmachine: (addons-145541)   </devices>
	I0729 17:33:31.959789   96181 main.go:141] libmachine: (addons-145541) </domain>
	I0729 17:33:31.959800   96181 main.go:141] libmachine: (addons-145541) 
	I0729 17:33:31.964203   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:61:14:7f in network default
	I0729 17:33:31.964820   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:31.964835   96181 main.go:141] libmachine: (addons-145541) Ensuring networks are active...
	I0729 17:33:31.965681   96181 main.go:141] libmachine: (addons-145541) Ensuring network default is active
	I0729 17:33:31.965993   96181 main.go:141] libmachine: (addons-145541) Ensuring network mk-addons-145541 is active
	I0729 17:33:31.966505   96181 main.go:141] libmachine: (addons-145541) Getting domain xml...
	I0729 17:33:31.967238   96181 main.go:141] libmachine: (addons-145541) Creating domain...
	I0729 17:33:32.402185   96181 main.go:141] libmachine: (addons-145541) Waiting to get IP...
	I0729 17:33:32.402902   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:32.403286   96181 main.go:141] libmachine: (addons-145541) DBG | unable to find current IP address of domain addons-145541 in network mk-addons-145541
	I0729 17:33:32.403326   96181 main.go:141] libmachine: (addons-145541) DBG | I0729 17:33:32.403243   96203 retry.go:31] will retry after 287.300904ms: waiting for machine to come up
	I0729 17:33:32.691769   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:32.692260   96181 main.go:141] libmachine: (addons-145541) DBG | unable to find current IP address of domain addons-145541 in network mk-addons-145541
	I0729 17:33:32.692288   96181 main.go:141] libmachine: (addons-145541) DBG | I0729 17:33:32.692209   96203 retry.go:31] will retry after 343.601877ms: waiting for machine to come up
	I0729 17:33:33.037850   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:33.038295   96181 main.go:141] libmachine: (addons-145541) DBG | unable to find current IP address of domain addons-145541 in network mk-addons-145541
	I0729 17:33:33.038327   96181 main.go:141] libmachine: (addons-145541) DBG | I0729 17:33:33.038257   96203 retry.go:31] will retry after 301.189756ms: waiting for machine to come up
	I0729 17:33:33.340710   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:33.341111   96181 main.go:141] libmachine: (addons-145541) DBG | unable to find current IP address of domain addons-145541 in network mk-addons-145541
	I0729 17:33:33.341136   96181 main.go:141] libmachine: (addons-145541) DBG | I0729 17:33:33.341058   96203 retry.go:31] will retry after 573.552478ms: waiting for machine to come up
	I0729 17:33:33.915817   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:33.916267   96181 main.go:141] libmachine: (addons-145541) DBG | unable to find current IP address of domain addons-145541 in network mk-addons-145541
	I0729 17:33:33.916309   96181 main.go:141] libmachine: (addons-145541) DBG | I0729 17:33:33.916225   96203 retry.go:31] will retry after 667.32481ms: waiting for machine to come up
	I0729 17:33:34.584997   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:34.585451   96181 main.go:141] libmachine: (addons-145541) DBG | unable to find current IP address of domain addons-145541 in network mk-addons-145541
	I0729 17:33:34.585481   96181 main.go:141] libmachine: (addons-145541) DBG | I0729 17:33:34.585413   96203 retry.go:31] will retry after 908.789948ms: waiting for machine to come up
	I0729 17:33:35.495355   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:35.495740   96181 main.go:141] libmachine: (addons-145541) DBG | unable to find current IP address of domain addons-145541 in network mk-addons-145541
	I0729 17:33:35.495769   96181 main.go:141] libmachine: (addons-145541) DBG | I0729 17:33:35.495704   96203 retry.go:31] will retry after 850.715135ms: waiting for machine to come up
	I0729 17:33:36.348259   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:36.348761   96181 main.go:141] libmachine: (addons-145541) DBG | unable to find current IP address of domain addons-145541 in network mk-addons-145541
	I0729 17:33:36.348789   96181 main.go:141] libmachine: (addons-145541) DBG | I0729 17:33:36.348718   96203 retry.go:31] will retry after 1.473559482s: waiting for machine to come up
	I0729 17:33:37.824316   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:37.824678   96181 main.go:141] libmachine: (addons-145541) DBG | unable to find current IP address of domain addons-145541 in network mk-addons-145541
	I0729 17:33:37.824705   96181 main.go:141] libmachine: (addons-145541) DBG | I0729 17:33:37.824646   96203 retry.go:31] will retry after 1.831409289s: waiting for machine to come up
	I0729 17:33:39.658781   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:39.659200   96181 main.go:141] libmachine: (addons-145541) DBG | unable to find current IP address of domain addons-145541 in network mk-addons-145541
	I0729 17:33:39.659228   96181 main.go:141] libmachine: (addons-145541) DBG | I0729 17:33:39.659155   96203 retry.go:31] will retry after 1.571944606s: waiting for machine to come up
	I0729 17:33:41.233074   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:41.233482   96181 main.go:141] libmachine: (addons-145541) DBG | unable to find current IP address of domain addons-145541 in network mk-addons-145541
	I0729 17:33:41.233516   96181 main.go:141] libmachine: (addons-145541) DBG | I0729 17:33:41.233440   96203 retry.go:31] will retry after 1.965774308s: waiting for machine to come up
	I0729 17:33:43.200345   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:43.200741   96181 main.go:141] libmachine: (addons-145541) DBG | unable to find current IP address of domain addons-145541 in network mk-addons-145541
	I0729 17:33:43.200765   96181 main.go:141] libmachine: (addons-145541) DBG | I0729 17:33:43.200690   96203 retry.go:31] will retry after 2.970460633s: waiting for machine to come up
	I0729 17:33:46.174691   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:46.175085   96181 main.go:141] libmachine: (addons-145541) DBG | unable to find current IP address of domain addons-145541 in network mk-addons-145541
	I0729 17:33:46.175116   96181 main.go:141] libmachine: (addons-145541) DBG | I0729 17:33:46.175013   96203 retry.go:31] will retry after 2.890326841s: waiting for machine to come up
	I0729 17:33:49.068417   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:49.068783   96181 main.go:141] libmachine: (addons-145541) DBG | unable to find current IP address of domain addons-145541 in network mk-addons-145541
	I0729 17:33:49.068804   96181 main.go:141] libmachine: (addons-145541) DBG | I0729 17:33:49.068729   96203 retry.go:31] will retry after 3.99642521s: waiting for machine to come up
	I0729 17:33:53.067633   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:53.067989   96181 main.go:141] libmachine: (addons-145541) Found IP for machine: 192.168.39.242
	I0729 17:33:53.068016   96181 main.go:141] libmachine: (addons-145541) Reserving static IP address...
	I0729 17:33:53.068030   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has current primary IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:53.068310   96181 main.go:141] libmachine: (addons-145541) DBG | unable to find host DHCP lease matching {name: "addons-145541", mac: "52:54:00:25:f4:2d", ip: "192.168.39.242"} in network mk-addons-145541
	I0729 17:33:53.214731   96181 main.go:141] libmachine: (addons-145541) DBG | Getting to WaitForSSH function...
	I0729 17:33:53.214767   96181 main.go:141] libmachine: (addons-145541) Reserved static IP address: 192.168.39.242
	I0729 17:33:53.214787   96181 main.go:141] libmachine: (addons-145541) Waiting for SSH to be available...
	I0729 17:33:53.217476   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:53.217820   96181 main.go:141] libmachine: (addons-145541) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541
	I0729 17:33:53.217844   96181 main.go:141] libmachine: (addons-145541) DBG | unable to find defined IP address of network mk-addons-145541 interface with MAC address 52:54:00:25:f4:2d
	I0729 17:33:53.217977   96181 main.go:141] libmachine: (addons-145541) DBG | Using SSH client type: external
	I0729 17:33:53.218002   96181 main.go:141] libmachine: (addons-145541) DBG | Using SSH private key: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/addons-145541/id_rsa (-rw-------)
	I0729 17:33:53.218047   96181 main.go:141] libmachine: (addons-145541) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19339-88081/.minikube/machines/addons-145541/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 17:33:53.218061   96181 main.go:141] libmachine: (addons-145541) DBG | About to run SSH command:
	I0729 17:33:53.218098   96181 main.go:141] libmachine: (addons-145541) DBG | exit 0
	I0729 17:33:53.221696   96181 main.go:141] libmachine: (addons-145541) DBG | SSH cmd err, output: exit status 255: 
	I0729 17:33:53.221714   96181 main.go:141] libmachine: (addons-145541) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0729 17:33:53.221721   96181 main.go:141] libmachine: (addons-145541) DBG | command : exit 0
	I0729 17:33:53.221726   96181 main.go:141] libmachine: (addons-145541) DBG | err     : exit status 255
	I0729 17:33:53.221733   96181 main.go:141] libmachine: (addons-145541) DBG | output  : 
	I0729 17:33:56.222144   96181 main.go:141] libmachine: (addons-145541) DBG | Getting to WaitForSSH function...
	I0729 17:33:56.224694   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:56.225062   96181 main.go:141] libmachine: (addons-145541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541: {Iface:virbr1 ExpiryTime:2024-07-29 18:33:45 +0000 UTC Type:0 Mac:52:54:00:25:f4:2d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:addons-145541 Clientid:01:52:54:00:25:f4:2d}
	I0729 17:33:56.225093   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:56.225204   96181 main.go:141] libmachine: (addons-145541) DBG | Using SSH client type: external
	I0729 17:33:56.225228   96181 main.go:141] libmachine: (addons-145541) DBG | Using SSH private key: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/addons-145541/id_rsa (-rw-------)
	I0729 17:33:56.225274   96181 main.go:141] libmachine: (addons-145541) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.242 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19339-88081/.minikube/machines/addons-145541/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 17:33:56.225286   96181 main.go:141] libmachine: (addons-145541) DBG | About to run SSH command:
	I0729 17:33:56.225316   96181 main.go:141] libmachine: (addons-145541) DBG | exit 0
	I0729 17:33:56.344777   96181 main.go:141] libmachine: (addons-145541) DBG | SSH cmd err, output: <nil>: 
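The WaitForSSH sequence above (a failed probe at 17:33:53 while the DHCP lease was still missing, then success at 17:33:56) amounts to re-running `ssh ... exit 0` against the guest with a short back-off. A minimal Go sketch of that retry loop, reusing the external-ssh flags shown in the log; the host, key path and attempt count are illustrative assumptions, not minikube's actual code:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH probes the guest with "ssh ... exit 0" until sshd answers.
func waitForSSH(host, keyPath string, attempts int) error {
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("ssh",
			"-F", "/dev/null",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"-p", "22",
			"docker@"+host,
			"exit 0")
		if err := cmd.Run(); err == nil {
			return nil // guest answered: SSH is available
		}
		time.Sleep(3 * time.Second) // roughly the gap between attempts in the log
	}
	return fmt.Errorf("ssh to %s not ready after %d attempts", host, attempts)
}

func main() {
	// placeholder key path; the log uses the machine's id_rsa under .minikube/machines/
	if err := waitForSSH("192.168.39.242", "/path/to/id_rsa", 10); err != nil {
		fmt.Println(err)
	}
}
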
	I0729 17:33:56.345061   96181 main.go:141] libmachine: (addons-145541) KVM machine creation complete!
	I0729 17:33:56.345375   96181 main.go:141] libmachine: (addons-145541) Calling .GetConfigRaw
	I0729 17:33:56.345885   96181 main.go:141] libmachine: (addons-145541) Calling .DriverName
	I0729 17:33:56.346064   96181 main.go:141] libmachine: (addons-145541) Calling .DriverName
	I0729 17:33:56.346238   96181 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 17:33:56.346253   96181 main.go:141] libmachine: (addons-145541) Calling .GetState
	I0729 17:33:56.347573   96181 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 17:33:56.347589   96181 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 17:33:56.347596   96181 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 17:33:56.347604   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHHostname
	I0729 17:33:56.349869   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:56.350222   96181 main.go:141] libmachine: (addons-145541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541: {Iface:virbr1 ExpiryTime:2024-07-29 18:33:45 +0000 UTC Type:0 Mac:52:54:00:25:f4:2d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:addons-145541 Clientid:01:52:54:00:25:f4:2d}
	I0729 17:33:56.350247   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:56.350389   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHPort
	I0729 17:33:56.350590   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHKeyPath
	I0729 17:33:56.350733   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHKeyPath
	I0729 17:33:56.350888   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHUsername
	I0729 17:33:56.351019   96181 main.go:141] libmachine: Using SSH client type: native
	I0729 17:33:56.351202   96181 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.242 22 <nil> <nil>}
	I0729 17:33:56.351212   96181 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 17:33:56.448185   96181 main.go:141] libmachine: SSH cmd err, output: <nil>: 
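For the "native" SSH client used here (the &{{{...}}} value above is a dumped client-config struct), the equivalent probe with golang.org/x/crypto/ssh looks roughly like the sketch below. The key path is a placeholder and InsecureIgnoreHostKey mirrors the throwaway test-VM setup; this is an assumption-laden illustration, not the project's exact code:

package main

import (
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/path/to/machines/addons-145541/id_rsa") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // disposable test VM only
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", "192.168.39.242:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	if err := sess.Run("exit 0"); err != nil {
		log.Fatal(err)
	}
	log.Println("SSH cmd err, output: <nil>")
}
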
	I0729 17:33:56.448216   96181 main.go:141] libmachine: Detecting the provisioner...
	I0729 17:33:56.448229   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHHostname
	I0729 17:33:56.450961   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:56.451286   96181 main.go:141] libmachine: (addons-145541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541: {Iface:virbr1 ExpiryTime:2024-07-29 18:33:45 +0000 UTC Type:0 Mac:52:54:00:25:f4:2d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:addons-145541 Clientid:01:52:54:00:25:f4:2d}
	I0729 17:33:56.451321   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:56.451437   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHPort
	I0729 17:33:56.451646   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHKeyPath
	I0729 17:33:56.451830   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHKeyPath
	I0729 17:33:56.451990   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHUsername
	I0729 17:33:56.452306   96181 main.go:141] libmachine: Using SSH client type: native
	I0729 17:33:56.452489   96181 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.242 22 <nil> <nil>}
	I0729 17:33:56.452501   96181 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 17:33:56.549426   96181 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 17:33:56.549496   96181 main.go:141] libmachine: found compatible host: buildroot
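Provisioner detection is just `cat /etc/os-release` plus key/value parsing; matching ID=buildroot selects the Buildroot provisioner. A self-contained sketch of that parse (field names follow os-release(5); everything else is illustrative):

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease turns os-release output into a KEY -> value map.
func parseOSRelease(out string) map[string]string {
	kv := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		kv[k] = strings.Trim(v, `"`)
	}
	return kv
}

func main() {
	out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	info := parseOSRelease(out)
	if info["ID"] == "buildroot" {
		fmt.Println("found compatible host:", info["PRETTY_NAME"])
	}
}
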
	I0729 17:33:56.549505   96181 main.go:141] libmachine: Provisioning with buildroot...
	I0729 17:33:56.549513   96181 main.go:141] libmachine: (addons-145541) Calling .GetMachineName
	I0729 17:33:56.549827   96181 buildroot.go:166] provisioning hostname "addons-145541"
	I0729 17:33:56.549860   96181 main.go:141] libmachine: (addons-145541) Calling .GetMachineName
	I0729 17:33:56.550048   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHHostname
	I0729 17:33:56.552558   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:56.552891   96181 main.go:141] libmachine: (addons-145541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541: {Iface:virbr1 ExpiryTime:2024-07-29 18:33:45 +0000 UTC Type:0 Mac:52:54:00:25:f4:2d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:addons-145541 Clientid:01:52:54:00:25:f4:2d}
	I0729 17:33:56.552919   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:56.553018   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHPort
	I0729 17:33:56.553189   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHKeyPath
	I0729 17:33:56.553354   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHKeyPath
	I0729 17:33:56.553474   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHUsername
	I0729 17:33:56.553626   96181 main.go:141] libmachine: Using SSH client type: native
	I0729 17:33:56.553798   96181 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.242 22 <nil> <nil>}
	I0729 17:33:56.553810   96181 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-145541 && echo "addons-145541" | sudo tee /etc/hostname
	I0729 17:33:56.662617   96181 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-145541
	
	I0729 17:33:56.662646   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHHostname
	I0729 17:33:56.665196   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:56.665552   96181 main.go:141] libmachine: (addons-145541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541: {Iface:virbr1 ExpiryTime:2024-07-29 18:33:45 +0000 UTC Type:0 Mac:52:54:00:25:f4:2d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:addons-145541 Clientid:01:52:54:00:25:f4:2d}
	I0729 17:33:56.665586   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:56.665810   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHPort
	I0729 17:33:56.666023   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHKeyPath
	I0729 17:33:56.666202   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHKeyPath
	I0729 17:33:56.666346   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHUsername
	I0729 17:33:56.666520   96181 main.go:141] libmachine: Using SSH client type: native
	I0729 17:33:56.666680   96181 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.242 22 <nil> <nil>}
	I0729 17:33:56.666694   96181 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-145541' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-145541/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-145541' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 17:33:56.769050   96181 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 17:33:56.769091   96181 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19339-88081/.minikube CaCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19339-88081/.minikube}
	I0729 17:33:56.769118   96181 buildroot.go:174] setting up certificates
	I0729 17:33:56.769141   96181 provision.go:84] configureAuth start
	I0729 17:33:56.769154   96181 main.go:141] libmachine: (addons-145541) Calling .GetMachineName
	I0729 17:33:56.769458   96181 main.go:141] libmachine: (addons-145541) Calling .GetIP
	I0729 17:33:56.771893   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:56.772247   96181 main.go:141] libmachine: (addons-145541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541: {Iface:virbr1 ExpiryTime:2024-07-29 18:33:45 +0000 UTC Type:0 Mac:52:54:00:25:f4:2d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:addons-145541 Clientid:01:52:54:00:25:f4:2d}
	I0729 17:33:56.772268   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:56.772420   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHHostname
	I0729 17:33:56.774712   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:56.774985   96181 main.go:141] libmachine: (addons-145541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541: {Iface:virbr1 ExpiryTime:2024-07-29 18:33:45 +0000 UTC Type:0 Mac:52:54:00:25:f4:2d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:addons-145541 Clientid:01:52:54:00:25:f4:2d}
	I0729 17:33:56.775010   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:56.775126   96181 provision.go:143] copyHostCerts
	I0729 17:33:56.775207   96181 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem (1078 bytes)
	I0729 17:33:56.775332   96181 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem (1123 bytes)
	I0729 17:33:56.775403   96181 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem (1679 bytes)
	I0729 17:33:56.775461   96181 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem org=jenkins.addons-145541 san=[127.0.0.1 192.168.39.242 addons-145541 localhost minikube]
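The server certificate generated here is a CA-signed cert whose SANs are exactly the list in the log (127.0.0.1, 192.168.39.242, addons-145541, localhost, minikube). A self-contained Go sketch of issuing such a cert with crypto/x509; this is not minikube's provision code, and the in-memory CA merely stands in for ca.pem/ca-key.pem:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// CA key + self-signed CA certificate (stand-in for ca.pem / ca-key.pem).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server key + certificate carrying the DNS/IP SANs seen in the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "addons-145541", Organization: []string{"jenkins.addons-145541"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0), // CertExpiration:26280h0m0s is ~3 years
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"addons-145541", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.242")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	// Emit server.pem; the key would be written similarly via x509.MarshalPKCS1PrivateKey.
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
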
	I0729 17:33:56.904923   96181 provision.go:177] copyRemoteCerts
	I0729 17:33:56.905004   96181 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 17:33:56.905031   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHHostname
	I0729 17:33:56.907713   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:56.908041   96181 main.go:141] libmachine: (addons-145541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541: {Iface:virbr1 ExpiryTime:2024-07-29 18:33:45 +0000 UTC Type:0 Mac:52:54:00:25:f4:2d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:addons-145541 Clientid:01:52:54:00:25:f4:2d}
	I0729 17:33:56.908077   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:56.908245   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHPort
	I0729 17:33:56.908426   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHKeyPath
	I0729 17:33:56.908575   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHUsername
	I0729 17:33:56.908702   96181 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/addons-145541/id_rsa Username:docker}
	I0729 17:33:56.986835   96181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 17:33:57.010570   96181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0729 17:33:57.033836   96181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 17:33:57.056508   96181 provision.go:87] duration metric: took 287.352961ms to configureAuth
	I0729 17:33:57.056534   96181 buildroot.go:189] setting minikube options for container-runtime
	I0729 17:33:57.056692   96181 config.go:182] Loaded profile config "addons-145541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:33:57.056766   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHHostname
	I0729 17:33:57.059447   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:57.059757   96181 main.go:141] libmachine: (addons-145541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541: {Iface:virbr1 ExpiryTime:2024-07-29 18:33:45 +0000 UTC Type:0 Mac:52:54:00:25:f4:2d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:addons-145541 Clientid:01:52:54:00:25:f4:2d}
	I0729 17:33:57.059785   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:57.059944   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHPort
	I0729 17:33:57.060103   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHKeyPath
	I0729 17:33:57.060230   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHKeyPath
	I0729 17:33:57.060348   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHUsername
	I0729 17:33:57.060481   96181 main.go:141] libmachine: Using SSH client type: native
	I0729 17:33:57.060673   96181 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.242 22 <nil> <nil>}
	I0729 17:33:57.060693   96181 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 17:33:57.312436   96181 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 17:33:57.312468   96181 main.go:141] libmachine: Checking connection to Docker...
	I0729 17:33:57.312479   96181 main.go:141] libmachine: (addons-145541) Calling .GetURL
	I0729 17:33:57.313738   96181 main.go:141] libmachine: (addons-145541) DBG | Using libvirt version 6000000
	I0729 17:33:57.315599   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:57.315906   96181 main.go:141] libmachine: (addons-145541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541: {Iface:virbr1 ExpiryTime:2024-07-29 18:33:45 +0000 UTC Type:0 Mac:52:54:00:25:f4:2d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:addons-145541 Clientid:01:52:54:00:25:f4:2d}
	I0729 17:33:57.315937   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:57.316047   96181 main.go:141] libmachine: Docker is up and running!
	I0729 17:33:57.316061   96181 main.go:141] libmachine: Reticulating splines...
	I0729 17:33:57.316070   96181 client.go:171] duration metric: took 25.974318348s to LocalClient.Create
	I0729 17:33:57.316097   96181 start.go:167] duration metric: took 25.974384032s to libmachine.API.Create "addons-145541"
	I0729 17:33:57.316110   96181 start.go:293] postStartSetup for "addons-145541" (driver="kvm2")
	I0729 17:33:57.316126   96181 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 17:33:57.316150   96181 main.go:141] libmachine: (addons-145541) Calling .DriverName
	I0729 17:33:57.316414   96181 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 17:33:57.316439   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHHostname
	I0729 17:33:57.318293   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:57.318591   96181 main.go:141] libmachine: (addons-145541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541: {Iface:virbr1 ExpiryTime:2024-07-29 18:33:45 +0000 UTC Type:0 Mac:52:54:00:25:f4:2d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:addons-145541 Clientid:01:52:54:00:25:f4:2d}
	I0729 17:33:57.318618   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:57.318719   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHPort
	I0729 17:33:57.318926   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHKeyPath
	I0729 17:33:57.319084   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHUsername
	I0729 17:33:57.319231   96181 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/addons-145541/id_rsa Username:docker}
	I0729 17:33:57.394349   96181 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 17:33:57.398422   96181 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 17:33:57.398443   96181 filesync.go:126] Scanning /home/jenkins/minikube-integration/19339-88081/.minikube/addons for local assets ...
	I0729 17:33:57.398511   96181 filesync.go:126] Scanning /home/jenkins/minikube-integration/19339-88081/.minikube/files for local assets ...
	I0729 17:33:57.398536   96181 start.go:296] duration metric: took 82.416834ms for postStartSetup
	I0729 17:33:57.398569   96181 main.go:141] libmachine: (addons-145541) Calling .GetConfigRaw
	I0729 17:33:57.399137   96181 main.go:141] libmachine: (addons-145541) Calling .GetIP
	I0729 17:33:57.401585   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:57.401902   96181 main.go:141] libmachine: (addons-145541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541: {Iface:virbr1 ExpiryTime:2024-07-29 18:33:45 +0000 UTC Type:0 Mac:52:54:00:25:f4:2d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:addons-145541 Clientid:01:52:54:00:25:f4:2d}
	I0729 17:33:57.401929   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:57.402116   96181 profile.go:143] Saving config to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/config.json ...
	I0729 17:33:57.402293   96181 start.go:128] duration metric: took 26.078958709s to createHost
	I0729 17:33:57.402332   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHHostname
	I0729 17:33:57.404344   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:57.404625   96181 main.go:141] libmachine: (addons-145541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541: {Iface:virbr1 ExpiryTime:2024-07-29 18:33:45 +0000 UTC Type:0 Mac:52:54:00:25:f4:2d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:addons-145541 Clientid:01:52:54:00:25:f4:2d}
	I0729 17:33:57.404649   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:57.404756   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHPort
	I0729 17:33:57.404958   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHKeyPath
	I0729 17:33:57.405105   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHKeyPath
	I0729 17:33:57.405222   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHUsername
	I0729 17:33:57.405395   96181 main.go:141] libmachine: Using SSH client type: native
	I0729 17:33:57.405556   96181 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.242 22 <nil> <nil>}
	I0729 17:33:57.405566   96181 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 17:33:57.501341   96181 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722274437.480741329
	
	I0729 17:33:57.501367   96181 fix.go:216] guest clock: 1722274437.480741329
	I0729 17:33:57.501379   96181 fix.go:229] Guest: 2024-07-29 17:33:57.480741329 +0000 UTC Remote: 2024-07-29 17:33:57.402304592 +0000 UTC m=+26.183051826 (delta=78.436737ms)
	I0729 17:33:57.501411   96181 fix.go:200] guest clock delta is within tolerance: 78.436737ms
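The guest-clock check runs `date +%s.%N` on the guest (output 1722274437.480741329 above), converts it to a timestamp and compares it against the host-side reference, accepting the 78.436737ms delta as within tolerance. A small sketch of that comparison; the 2s tolerance is an assumed value for illustration, not the real threshold:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts "seconds.nanoseconds" (date +%s.%N) into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		frac := parts[1]
		if len(frac) > 9 {
			frac = frac[:9]
		} else {
			frac += strings.Repeat("0", 9-len(frac)) // right-pad so "48" means 480ms, not 48ns
		}
		nsec, _ = strconv.ParseInt(frac, 10, 64)
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1722274437.480741329")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta: %s (within assumed 2s tolerance: %v)\n", delta, delta < 2*time.Second)
}
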
	I0729 17:33:57.501423   96181 start.go:83] releasing machines lock for "addons-145541", held for 26.178190487s
	I0729 17:33:57.501468   96181 main.go:141] libmachine: (addons-145541) Calling .DriverName
	I0729 17:33:57.501729   96181 main.go:141] libmachine: (addons-145541) Calling .GetIP
	I0729 17:33:57.504025   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:57.504381   96181 main.go:141] libmachine: (addons-145541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541: {Iface:virbr1 ExpiryTime:2024-07-29 18:33:45 +0000 UTC Type:0 Mac:52:54:00:25:f4:2d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:addons-145541 Clientid:01:52:54:00:25:f4:2d}
	I0729 17:33:57.504412   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:57.504496   96181 main.go:141] libmachine: (addons-145541) Calling .DriverName
	I0729 17:33:57.505058   96181 main.go:141] libmachine: (addons-145541) Calling .DriverName
	I0729 17:33:57.505229   96181 main.go:141] libmachine: (addons-145541) Calling .DriverName
	I0729 17:33:57.505345   96181 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 17:33:57.505399   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHHostname
	I0729 17:33:57.505447   96181 ssh_runner.go:195] Run: cat /version.json
	I0729 17:33:57.505472   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHHostname
	I0729 17:33:57.507746   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:57.508029   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:57.508115   96181 main.go:141] libmachine: (addons-145541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541: {Iface:virbr1 ExpiryTime:2024-07-29 18:33:45 +0000 UTC Type:0 Mac:52:54:00:25:f4:2d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:addons-145541 Clientid:01:52:54:00:25:f4:2d}
	I0729 17:33:57.508142   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:57.508253   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHPort
	I0729 17:33:57.508401   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHKeyPath
	I0729 17:33:57.508463   96181 main.go:141] libmachine: (addons-145541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541: {Iface:virbr1 ExpiryTime:2024-07-29 18:33:45 +0000 UTC Type:0 Mac:52:54:00:25:f4:2d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:addons-145541 Clientid:01:52:54:00:25:f4:2d}
	I0729 17:33:57.508487   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:57.508530   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHUsername
	I0729 17:33:57.508668   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHPort
	I0729 17:33:57.508691   96181 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/addons-145541/id_rsa Username:docker}
	I0729 17:33:57.508827   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHKeyPath
	I0729 17:33:57.509002   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHUsername
	I0729 17:33:57.509130   96181 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/addons-145541/id_rsa Username:docker}
	I0729 17:33:57.601477   96181 ssh_runner.go:195] Run: systemctl --version
	I0729 17:33:57.607389   96181 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 17:33:57.769040   96181 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 17:33:57.775545   96181 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 17:33:57.775650   96181 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 17:33:57.792953   96181 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 17:33:57.792979   96181 start.go:495] detecting cgroup driver to use...
	I0729 17:33:57.793045   96181 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 17:33:57.811468   96181 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 17:33:57.826077   96181 docker.go:217] disabling cri-docker service (if available) ...
	I0729 17:33:57.826142   96181 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 17:33:57.840097   96181 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 17:33:57.853831   96181 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 17:33:57.972561   96181 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 17:33:58.136458   96181 docker.go:233] disabling docker service ...
	I0729 17:33:58.136530   96181 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 17:33:58.151265   96181 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 17:33:58.164102   96181 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 17:33:58.276623   96181 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 17:33:58.392511   96181 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 17:33:58.406905   96181 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 17:33:58.424771   96181 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 17:33:58.424833   96181 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:33:58.435285   96181 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 17:33:58.435352   96181 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:33:58.445861   96181 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:33:58.456768   96181 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:33:58.467842   96181 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 17:33:58.478680   96181 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:33:58.491453   96181 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:33:58.509245   96181 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
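The sed chain above edits /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.9, switch cgroup_manager to cgroupfs, re-add conmon_cgroup = "pod", and seed default_sysctls with net.ipv4.ip_unprivileged_port_start=0. The first three of those edits, expressed as in-memory regexp rewrites in Go (the sample file contents are illustrative, and writing the result back would need root):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `pause_image = "registry.k8s.io/pause:3.5"
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// pin the pause image used by CRI-O
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	// switch the cgroup driver to cgroupfs
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// drop any existing conmon_cgroup line, then re-add it right after cgroup_manager
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
		ReplaceAllString(conf, "$0\nconmon_cgroup = \"pod\"")
	fmt.Print(conf)
}
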
	I0729 17:33:58.519956   96181 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 17:33:58.529329   96181 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 17:33:58.529390   96181 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 17:33:58.541915   96181 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 17:33:58.551442   96181 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 17:33:58.674796   96181 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 17:33:59.052308   96181 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 17:33:59.052400   96181 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 17:33:59.057314   96181 start.go:563] Will wait 60s for crictl version
	I0729 17:33:59.057384   96181 ssh_runner.go:195] Run: which crictl
	I0729 17:33:59.061260   96181 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 17:33:59.102524   96181 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 17:33:59.102645   96181 ssh_runner.go:195] Run: crio --version
	I0729 17:33:59.129700   96181 ssh_runner.go:195] Run: crio --version
	I0729 17:33:59.275153   96181 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 17:33:59.338558   96181 main.go:141] libmachine: (addons-145541) Calling .GetIP
	I0729 17:33:59.341415   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:59.341713   96181 main.go:141] libmachine: (addons-145541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541: {Iface:virbr1 ExpiryTime:2024-07-29 18:33:45 +0000 UTC Type:0 Mac:52:54:00:25:f4:2d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:addons-145541 Clientid:01:52:54:00:25:f4:2d}
	I0729 17:33:59.341742   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:59.341974   96181 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 17:33:59.346512   96181 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 17:33:59.359083   96181 kubeadm.go:883] updating cluster {Name:addons-145541 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:addons-145541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.242 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 17:33:59.359229   96181 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 17:33:59.359273   96181 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 17:33:59.390018   96181 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 17:33:59.390090   96181 ssh_runner.go:195] Run: which lz4
	I0729 17:33:59.394262   96181 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 17:33:59.398567   96181 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 17:33:59.398608   96181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 17:34:00.710651   96181 crio.go:462] duration metric: took 1.316514502s to copy over tarball
	I0729 17:34:00.710724   96181 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 17:34:02.900978   96181 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.190220668s)
	I0729 17:34:02.901012   96181 crio.go:469] duration metric: took 2.190328331s to extract the tarball
	I0729 17:34:02.901023   96181 ssh_runner.go:146] rm: /preloaded.tar.lz4
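Since no preloaded images were found on the fresh VM, the ~406MB preload tarball is copied to /preloaded.tar.lz4, unpacked into /var with xattrs preserved, and then removed. A sketch of the extract step mirroring the tar invocation above (the scp part is omitted; this assumes it runs on the guest with sudo available):

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	const preload = "/preloaded.tar.lz4"
	if _, err := os.Stat(preload); err != nil {
		log.Fatalf("preload tarball not present (it would be scp'd over first): %v", err)
	}
	// same flags as the logged command: keep security.capability xattrs, decompress with lz4
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", preload)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("extract failed: %v", err)
	}
	_ = os.Remove(preload) // mirrors the rm of /preloaded.tar.lz4 (may itself need sudo)
}
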
	I0729 17:34:02.938961   96181 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 17:34:02.982550   96181 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 17:34:02.982582   96181 cache_images.go:84] Images are preloaded, skipping loading
	I0729 17:34:02.982592   96181 kubeadm.go:934] updating node { 192.168.39.242 8443 v1.30.3 crio true true} ...
	I0729 17:34:02.982725   96181 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-145541 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.242
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-145541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 17:34:02.982792   96181 ssh_runner.go:195] Run: crio config
	I0729 17:34:03.029296   96181 cni.go:84] Creating CNI manager for ""
	I0729 17:34:03.029318   96181 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 17:34:03.029328   96181 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 17:34:03.029350   96181 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.242 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-145541 NodeName:addons-145541 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.242"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.242 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 17:34:03.029487   96181 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.242
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-145541"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.242
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.242"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 17:34:03.029548   96181 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 17:34:03.039754   96181 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 17:34:03.039832   96181 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 17:34:03.049461   96181 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0729 17:34:03.065464   96181 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 17:34:03.081096   96181 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
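The 2157-byte kubeadm.yaml.new staged here is the rendered form of the InitConfiguration/ClusterConfiguration/KubeletConfiguration/KubeProxyConfiguration dump shown above, built from the kubeadm options struct. A much-trimmed sketch of such a render with text/template; the parameter struct and template are simplified stand-ins, not minikube's real ones:

package main

import (
	"os"
	"text/template"
)

// KubeadmParams is a trimmed-down, hypothetical parameter struct.
type KubeadmParams struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	PodSubnet        string
	ServiceSubnet    string
	K8sVersion       string
}

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.K8sVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(initCfg))
	_ = t.Execute(os.Stdout, KubeadmParams{
		AdvertiseAddress: "192.168.39.242",
		BindPort:         8443,
		NodeName:         "addons-145541",
		PodSubnet:        "10.244.0.0/16",
		ServiceSubnet:    "10.96.0.0/12",
		K8sVersion:       "v1.30.3",
	})
}
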
	I0729 17:34:03.096562   96181 ssh_runner.go:195] Run: grep 192.168.39.242	control-plane.minikube.internal$ /etc/hosts
	I0729 17:34:03.100157   96181 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.242	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 17:34:03.112040   96181 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 17:34:03.236931   96181 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 17:34:03.253661   96181 certs.go:68] Setting up /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541 for IP: 192.168.39.242
	I0729 17:34:03.253685   96181 certs.go:194] generating shared ca certs ...
	I0729 17:34:03.253704   96181 certs.go:226] acquiring lock for ca certs: {Name:mk20106d58b3f22ea0fc0f4b499fa3fa572a2690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:34:03.253865   96181 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key
	I0729 17:34:03.435416   96181 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt ...
	I0729 17:34:03.435447   96181 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt: {Name:mkcdc05dbad796c476f02d51b3a2d88a15d0d683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:34:03.435610   96181 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key ...
	I0729 17:34:03.435621   96181 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key: {Name:mk0b8766ee3521c080cdd099e5be695daddeacb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:34:03.435695   96181 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key
	I0729 17:34:03.479194   96181 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.crt ...
	I0729 17:34:03.479221   96181 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.crt: {Name:mk0a16b6fef48a2455bf549200f59231422c45e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:34:03.479382   96181 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key ...
	I0729 17:34:03.479395   96181 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key: {Name:mk91b34d44bbab81d266825125925925d9e53f1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:34:03.479468   96181 certs.go:256] generating profile certs ...
	I0729 17:34:03.479549   96181 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/client.key
	I0729 17:34:03.479563   96181 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/client.crt with IP's: []
	I0729 17:34:03.551828   96181 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/client.crt ...
	I0729 17:34:03.551853   96181 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/client.crt: {Name:mk2ca63031f899e556ef4a518b28dbec6a1faf6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:34:03.551991   96181 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/client.key ...
	I0729 17:34:03.552001   96181 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/client.key: {Name:mk270c9e5c3cb7083a0750c829f349028aecab2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:34:03.552065   96181 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/apiserver.key.bccb6cf7
	I0729 17:34:03.552083   96181 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/apiserver.crt.bccb6cf7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.242]
	I0729 17:34:03.671667   96181 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/apiserver.crt.bccb6cf7 ...
	I0729 17:34:03.671696   96181 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/apiserver.crt.bccb6cf7: {Name:mkfcad8a7b6f08239890db5a75dd879612f7fc5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:34:03.671839   96181 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/apiserver.key.bccb6cf7 ...
	I0729 17:34:03.671857   96181 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/apiserver.key.bccb6cf7: {Name:mkc1cbe6197ba105da01b2e8ce9bf54e050e4c11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:34:03.671951   96181 certs.go:381] copying /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/apiserver.crt.bccb6cf7 -> /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/apiserver.crt
	I0729 17:34:03.672038   96181 certs.go:385] copying /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/apiserver.key.bccb6cf7 -> /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/apiserver.key
	I0729 17:34:03.672103   96181 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/proxy-client.key
	I0729 17:34:03.672128   96181 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/proxy-client.crt with IP's: []
	I0729 17:34:03.763967   96181 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/proxy-client.crt ...
	I0729 17:34:03.763995   96181 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/proxy-client.crt: {Name:mkf837c991a91f96016882e96dd66956c2f5bd2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:34:03.764141   96181 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/proxy-client.key ...
	I0729 17:34:03.764151   96181 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/proxy-client.key: {Name:mkca8e35f0205f8941a850440a2051578e9359b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:34:03.764306   96181 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 17:34:03.764339   96181 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem (1078 bytes)
	I0729 17:34:03.764363   96181 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem (1123 bytes)
	I0729 17:34:03.764389   96181 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem (1679 bytes)
	I0729 17:34:03.765019   96181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 17:34:03.789314   96181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 17:34:03.811545   96181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 17:34:03.833421   96181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 17:34:03.855707   96181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0729 17:34:03.877692   96181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 17:34:03.899963   96181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 17:34:03.923645   96181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 17:34:03.947570   96181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 17:34:03.969085   96181 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 17:34:03.989183   96181 ssh_runner.go:195] Run: openssl version
	I0729 17:34:03.995603   96181 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 17:34:04.006642   96181 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:34:04.011204   96181 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:34:04.011262   96181 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:34:04.016987   96181 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 17:34:04.027693   96181 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 17:34:04.031555   96181 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 17:34:04.031607   96181 kubeadm.go:392] StartCluster: {Name:addons-145541 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 C
lusterName:addons-145541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.242 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:34:04.031705   96181 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 17:34:04.031771   96181 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 17:34:04.067523   96181 cri.go:89] found id: ""
	I0729 17:34:04.067602   96181 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 17:34:04.077578   96181 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 17:34:04.087390   96181 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 17:34:04.096793   96181 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 17:34:04.096823   96181 kubeadm.go:157] found existing configuration files:
	
	I0729 17:34:04.096887   96181 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 17:34:04.105701   96181 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 17:34:04.105762   96181 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 17:34:04.114971   96181 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 17:34:04.124022   96181 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 17:34:04.124075   96181 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 17:34:04.133329   96181 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 17:34:04.142383   96181 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 17:34:04.142434   96181 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 17:34:04.151903   96181 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 17:34:04.161268   96181 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 17:34:04.161333   96181 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 17:34:04.170791   96181 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 17:34:04.362502   96181 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 17:34:14.912704   96181 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 17:34:14.912776   96181 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 17:34:14.912883   96181 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 17:34:14.913013   96181 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 17:34:14.913133   96181 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 17:34:14.913271   96181 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 17:34:14.914905   96181 out.go:204]   - Generating certificates and keys ...
	I0729 17:34:14.914989   96181 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 17:34:14.915079   96181 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 17:34:14.915150   96181 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0729 17:34:14.915203   96181 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0729 17:34:14.915261   96181 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0729 17:34:14.915346   96181 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0729 17:34:14.915433   96181 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0729 17:34:14.915597   96181 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-145541 localhost] and IPs [192.168.39.242 127.0.0.1 ::1]
	I0729 17:34:14.915653   96181 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0729 17:34:14.915766   96181 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-145541 localhost] and IPs [192.168.39.242 127.0.0.1 ::1]
	I0729 17:34:14.915851   96181 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0729 17:34:14.915945   96181 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0729 17:34:14.916008   96181 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0729 17:34:14.916087   96181 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 17:34:14.916174   96181 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 17:34:14.916259   96181 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 17:34:14.916316   96181 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 17:34:14.916369   96181 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 17:34:14.916414   96181 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 17:34:14.916488   96181 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 17:34:14.916562   96181 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 17:34:14.918225   96181 out.go:204]   - Booting up control plane ...
	I0729 17:34:14.918330   96181 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 17:34:14.918398   96181 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 17:34:14.918465   96181 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 17:34:14.918600   96181 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 17:34:14.918673   96181 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 17:34:14.918709   96181 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 17:34:14.918820   96181 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 17:34:14.918889   96181 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 17:34:14.918938   96181 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 500.993498ms
	I0729 17:34:14.919006   96181 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 17:34:14.919079   96181 kubeadm.go:310] [api-check] The API server is healthy after 5.001415465s
	I0729 17:34:14.919171   96181 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 17:34:14.919302   96181 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 17:34:14.919392   96181 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 17:34:14.919549   96181 kubeadm.go:310] [mark-control-plane] Marking the node addons-145541 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 17:34:14.919602   96181 kubeadm.go:310] [bootstrap-token] Using token: a4jki6.7rj17ttaoqkipt8u
	I0729 17:34:14.920937   96181 out.go:204]   - Configuring RBAC rules ...
	I0729 17:34:14.921055   96181 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 17:34:14.921135   96181 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 17:34:14.921249   96181 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 17:34:14.921396   96181 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 17:34:14.921509   96181 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 17:34:14.921618   96181 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 17:34:14.921757   96181 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 17:34:14.921795   96181 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 17:34:14.921841   96181 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 17:34:14.921850   96181 kubeadm.go:310] 
	I0729 17:34:14.921910   96181 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 17:34:14.921918   96181 kubeadm.go:310] 
	I0729 17:34:14.921998   96181 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 17:34:14.922007   96181 kubeadm.go:310] 
	I0729 17:34:14.922039   96181 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 17:34:14.922093   96181 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 17:34:14.922141   96181 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 17:34:14.922147   96181 kubeadm.go:310] 
	I0729 17:34:14.922195   96181 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 17:34:14.922206   96181 kubeadm.go:310] 
	I0729 17:34:14.922252   96181 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 17:34:14.922259   96181 kubeadm.go:310] 
	I0729 17:34:14.922320   96181 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 17:34:14.922423   96181 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 17:34:14.922504   96181 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 17:34:14.922516   96181 kubeadm.go:310] 
	I0729 17:34:14.922588   96181 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 17:34:14.922651   96181 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 17:34:14.922657   96181 kubeadm.go:310] 
	I0729 17:34:14.922744   96181 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token a4jki6.7rj17ttaoqkipt8u \
	I0729 17:34:14.922843   96181 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:13f0bc092dc09b7291dec830fc09c94db5ac1707dd26df6091df80009af3af7f \
	I0729 17:34:14.922864   96181 kubeadm.go:310] 	--control-plane 
	I0729 17:34:14.922868   96181 kubeadm.go:310] 
	I0729 17:34:14.922935   96181 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 17:34:14.922941   96181 kubeadm.go:310] 
	I0729 17:34:14.923010   96181 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token a4jki6.7rj17ttaoqkipt8u \
	I0729 17:34:14.923111   96181 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:13f0bc092dc09b7291dec830fc09c94db5ac1707dd26df6091df80009af3af7f 
	I0729 17:34:14.923123   96181 cni.go:84] Creating CNI manager for ""
	I0729 17:34:14.923130   96181 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 17:34:14.924657   96181 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 17:34:14.925913   96181 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 17:34:14.936704   96181 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
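
The two steps above configure the bridge CNI: minikube creates /etc/cni/net.d and copies a 496-byte conflist named 1-k8s.conflist into it. The log does not show the file's contents, so the Go sketch below only illustrates the typical bridge-plus-portmap conflist shape; every field value in it is an assumption, not the verbatim file minikube writes.

// Illustrative sketch only (not minikube's code): write a bridge CNI conflist
// to /etc/cni/net.d. The JSON values below are assumed, typical defaults for
// the reference "bridge" and "portmap" CNI plugins.
package main

import (
	"log"
	"os"
)

const confList = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	// Ensure the CNI config directory exists, then write the conflist.
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(confList), 0o644); err != nil {
		log.Fatal(err)
	}
}

CRI-O loads the lexicographically first configuration file it finds in /etc/cni/net.d, which is why the file carries the 1- prefix.
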
	I0729 17:34:14.954347   96181 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 17:34:14.954447   96181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:34:14.954507   96181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-145541 minikube.k8s.io/updated_at=2024_07_29T17_34_14_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=de53afae5e8f4269438099f4dad14d93a8a17e35 minikube.k8s.io/name=addons-145541 minikube.k8s.io/primary=true
	I0729 17:34:14.971084   96181 ops.go:34] apiserver oom_adj: -16
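
The three commands above prepare the cluster for the addon machinery: bind cluster-admin to the kube-system default service account (clusterrolebinding minikube-rbac), label the node with minikube version metadata, and read the API server's oom_adj (-16). As a rough client-go illustration of the same role binding (a sketch, not minikube's own code; only the kubeconfig path is taken from the log):

// Illustrative sketch only: create the minikube-rbac ClusterRoleBinding with
// client-go, equivalent to the "kubectl create clusterrolebinding" call above.
package main

import (
	"context"
	"log"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// cluster-admin for the kube-system default service account.
	crb := &rbacv1.ClusterRoleBinding{
		ObjectMeta: metav1.ObjectMeta{Name: "minikube-rbac"},
		RoleRef: rbacv1.RoleRef{
			APIGroup: "rbac.authorization.k8s.io",
			Kind:     "ClusterRole",
			Name:     "cluster-admin",
		},
		Subjects: []rbacv1.Subject{{
			Kind:      "ServiceAccount",
			Name:      "default",
			Namespace: "kube-system",
		}},
	}
	if _, err := clientset.RbacV1().ClusterRoleBindings().Create(context.Background(), crb, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}
}
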
	I0729 17:34:15.062005   96181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:34:15.562520   96181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:34:16.062052   96181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:34:16.562207   96181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:34:17.062276   96181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:34:17.562800   96181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:34:18.062785   96181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:34:18.562169   96181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:34:19.062060   96181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:34:19.562165   96181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:34:20.062918   96181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:34:20.562316   96181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:34:21.062340   96181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:34:21.562521   96181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:34:22.062718   96181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:34:22.562933   96181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:34:23.062033   96181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:34:23.562861   96181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:34:24.062196   96181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:34:24.562029   96181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:34:25.063040   96181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:34:25.562363   96181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:34:26.062342   96181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:34:26.562427   96181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:34:27.062083   96181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:34:27.562859   96181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:34:27.662394   96181 kubeadm.go:1113] duration metric: took 12.708011497s to wait for elevateKubeSystemPrivileges
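
The burst of identical "kubectl get sa default" runs above is a simple poll: the command is retried roughly every 500ms until the default ServiceAccount exists, and the 12.7s duration metric records how long that took on this run. A standard-library Go sketch of the same wait pattern follows (the helper name and the two-minute timeout are assumptions for illustration, not minikube's code):

// Illustrative sketch only: poll "kubectl get sa default" until it succeeds
// or a timeout expires, mirroring the retry loop visible in the log above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForDefaultServiceAccount(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // the default ServiceAccount is present
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %s waiting for default service account", timeout)
		}
		time.Sleep(500 * time.Millisecond) // same cadence as the retries above
	}
}

func main() {
	err := waitForDefaultServiceAccount(
		"/var/lib/minikube/binaries/v1.30.3/kubectl",
		"/var/lib/minikube/kubeconfig",
		2*time.Minute,
	)
	fmt.Println("wait result:", err)
}
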
	I0729 17:34:27.662441   96181 kubeadm.go:394] duration metric: took 23.630838114s to StartCluster
	I0729 17:34:27.662464   96181 settings.go:142] acquiring lock: {Name:mk5599d3fc2f664ed5eea99f33b4436f64ab8c83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:34:27.662586   96181 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19339-88081/kubeconfig
	I0729 17:34:27.663103   96181 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/kubeconfig: {Name:mk6702c0db404329102489655bdd2ff03ad6e919 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:34:27.663306   96181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0729 17:34:27.663350   96181 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.242 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 17:34:27.663405   96181 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0729 17:34:27.663492   96181 addons.go:69] Setting yakd=true in profile "addons-145541"
	I0729 17:34:27.663503   96181 addons.go:69] Setting cloud-spanner=true in profile "addons-145541"
	I0729 17:34:27.663533   96181 addons.go:234] Setting addon yakd=true in "addons-145541"
	I0729 17:34:27.663536   96181 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-145541"
	I0729 17:34:27.663549   96181 addons.go:234] Setting addon cloud-spanner=true in "addons-145541"
	I0729 17:34:27.663542   96181 addons.go:69] Setting metrics-server=true in profile "addons-145541"
	I0729 17:34:27.663566   96181 config.go:182] Loaded profile config "addons-145541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:34:27.663587   96181 host.go:66] Checking if "addons-145541" exists ...
	I0729 17:34:27.663595   96181 addons.go:69] Setting registry=true in profile "addons-145541"
	I0729 17:34:27.663607   96181 addons.go:234] Setting addon metrics-server=true in "addons-145541"
	I0729 17:34:27.663577   96181 host.go:66] Checking if "addons-145541" exists ...
	I0729 17:34:27.663625   96181 addons.go:69] Setting volcano=true in profile "addons-145541"
	I0729 17:34:27.663634   96181 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-145541"
	I0729 17:34:27.663654   96181 addons.go:234] Setting addon volcano=true in "addons-145541"
	I0729 17:34:27.663655   96181 addons.go:69] Setting gcp-auth=true in profile "addons-145541"
	I0729 17:34:27.663674   96181 mustload.go:65] Loading cluster: addons-145541
	I0729 17:34:27.663683   96181 addons.go:69] Setting volumesnapshots=true in profile "addons-145541"
	I0729 17:34:27.663685   96181 addons.go:69] Setting storage-provisioner=true in profile "addons-145541"
	I0729 17:34:27.663705   96181 addons.go:234] Setting addon storage-provisioner=true in "addons-145541"
	I0729 17:34:27.663708   96181 addons.go:234] Setting addon volumesnapshots=true in "addons-145541"
	I0729 17:34:27.663490   96181 addons.go:69] Setting default-storageclass=true in profile "addons-145541"
	I0729 17:34:27.663727   96181 host.go:66] Checking if "addons-145541" exists ...
	I0729 17:34:27.663734   96181 host.go:66] Checking if "addons-145541" exists ...
	I0729 17:34:27.663742   96181 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-145541"
	I0729 17:34:27.663861   96181 config.go:182] Loaded profile config "addons-145541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:34:27.664024   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.664064   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.663674   96181 host.go:66] Checking if "addons-145541" exists ...
	I0729 17:34:27.664152   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.664195   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.664194   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.663649   96181 host.go:66] Checking if "addons-145541" exists ...
	I0729 17:34:27.664222   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.664259   96181 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-145541"
	I0729 17:34:27.663587   96181 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-145541"
	I0729 17:34:27.664295   96181 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-145541"
	I0729 17:34:27.664327   96181 host.go:66] Checking if "addons-145541" exists ...
	I0729 17:34:27.664427   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.664450   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.663616   96181 addons.go:234] Setting addon registry=true in "addons-145541"
	I0729 17:34:27.663674   96181 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-145541"
	I0729 17:34:27.664519   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.664522   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.664535   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.664555   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.664613   96181 host.go:66] Checking if "addons-145541" exists ...
	I0729 17:34:27.664625   96181 addons.go:69] Setting helm-tiller=true in profile "addons-145541"
	I0729 17:34:27.664646   96181 addons.go:69] Setting inspektor-gadget=true in profile "addons-145541"
	I0729 17:34:27.664656   96181 addons.go:69] Setting ingress=true in profile "addons-145541"
	I0729 17:34:27.664672   96181 addons.go:234] Setting addon ingress=true in "addons-145541"
	I0729 17:34:27.664648   96181 addons.go:234] Setting addon helm-tiller=true in "addons-145541"
	I0729 17:34:27.664675   96181 addons.go:234] Setting addon inspektor-gadget=true in "addons-145541"
	I0729 17:34:27.664705   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.664738   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.664619   96181 addons.go:69] Setting ingress-dns=true in profile "addons-145541"
	I0729 17:34:27.664905   96181 addons.go:234] Setting addon ingress-dns=true in "addons-145541"
	I0729 17:34:27.664920   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.664931   96181 host.go:66] Checking if "addons-145541" exists ...
	I0729 17:34:27.664940   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.664965   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.664987   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.664998   96181 host.go:66] Checking if "addons-145541" exists ...
	I0729 17:34:27.665047   96181 host.go:66] Checking if "addons-145541" exists ...
	I0729 17:34:27.665070   96181 out.go:177] * Verifying Kubernetes components...
	I0729 17:34:27.665109   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.665130   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.665329   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.665339   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.665346   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.665357   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.665361   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.665362   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.665376   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.665400   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.665493   96181 host.go:66] Checking if "addons-145541" exists ...
	I0729 17:34:27.665646   96181 host.go:66] Checking if "addons-145541" exists ...
	I0729 17:34:27.680992   96181 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 17:34:27.684128   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40163
	I0729 17:34:27.684142   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44299
	I0729 17:34:27.684277   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43559
	I0729 17:34:27.684700   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.684963   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.685344   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.685365   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.685773   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.685936   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42421
	I0729 17:34:27.686474   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.686506   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.693026   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.693093   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41415
	I0729 17:34:27.693126   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.693141   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.693584   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.693872   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.693930   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.694320   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.694394   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.695023   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.695074   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.695478   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.696664   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36295
	I0729 17:34:27.697379   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42201
	I0729 17:34:27.697848   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.698394   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.698414   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.698748   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.699308   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.699350   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.701936   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40465
	I0729 17:34:27.702427   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.702969   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.702988   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.703457   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.704041   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.704079   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.705305   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.705348   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.705547   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.705580   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.705808   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.705848   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.705305   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.711607   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.711647   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.711500   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.711735   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.712449   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.712470   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.711557   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.713024   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.713042   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.713501   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.714097   96181 main.go:141] libmachine: (addons-145541) Calling .GetState
	I0729 17:34:27.714158   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.715340   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.715363   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.718957   96181 addons.go:234] Setting addon default-storageclass=true in "addons-145541"
	I0729 17:34:27.719004   96181 host.go:66] Checking if "addons-145541" exists ...
	I0729 17:34:27.719346   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.719367   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.735020   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37927
	I0729 17:34:27.739011   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36981
	I0729 17:34:27.739578   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.740182   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.740203   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.740638   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.740902   96181 main.go:141] libmachine: (addons-145541) Calling .GetState
	I0729 17:34:27.741508   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.742203   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.742231   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.742630   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.742904   96181 main.go:141] libmachine: (addons-145541) Calling .DriverName
	I0729 17:34:27.742997   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36209
	I0729 17:34:27.743060   96181 main.go:141] libmachine: (addons-145541) Calling .GetState
	I0729 17:34:27.743094   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:27.743107   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:27.743274   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:27.743287   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:34:27.743296   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:27.743303   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:27.743422   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.743907   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.743927   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.744279   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.744930   96181 main.go:141] libmachine: (addons-145541) Calling .DriverName
	I0729 17:34:27.745021   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.745059   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.745669   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43291
	I0729 17:34:27.746229   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.746624   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46197
	I0729 17:34:27.746877   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.746904   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.747237   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.747416   96181 main.go:141] libmachine: (addons-145541) Calling .GetState
	I0729 17:34:27.748284   96181 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 17:34:27.749091   96181 main.go:141] libmachine: (addons-145541) Calling .DriverName
	I0729 17:34:27.749161   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:27.749176   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	W0729 17:34:27.749270   96181 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0729 17:34:27.749931   96181 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 17:34:27.749952   96181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 17:34:27.749970   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHHostname
	I0729 17:34:27.750073   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.750189   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38427
	I0729 17:34:27.750484   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.750586   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45525
	I0729 17:34:27.750940   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.750955   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.751166   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.751293   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.751330   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.751341   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.751462   96181 out.go:177]   - Using image docker.io/registry:2.8.3
	I0729 17:34:27.751979   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.752035   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.752268   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42681
	I0729 17:34:27.752379   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.752549   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.752567   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.752747   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44771
	I0729 17:34:27.752972   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.753085   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.753682   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.753719   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.753719   96181 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0729 17:34:27.754076   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.754092   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.754584   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.754654   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:34:27.754960   96181 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0729 17:34:27.754979   96181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0729 17:34:27.755002   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHHostname
	I0729 17:34:27.755078   96181 main.go:141] libmachine: (addons-145541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541: {Iface:virbr1 ExpiryTime:2024-07-29 18:33:45 +0000 UTC Type:0 Mac:52:54:00:25:f4:2d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:addons-145541 Clientid:01:52:54:00:25:f4:2d}
	I0729 17:34:27.755093   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:34:27.755118   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38123
	I0729 17:34:27.755191   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.755212   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.755443   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.755544   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHPort
	I0729 17:34:27.755741   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHKeyPath
	I0729 17:34:27.756068   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.756371   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.756387   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.756598   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.756616   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.756935   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39577
	I0729 17:34:27.757057   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.757254   96181 main.go:141] libmachine: (addons-145541) Calling .GetState
	I0729 17:34:27.757340   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.757354   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.757536   96181 main.go:141] libmachine: (addons-145541) Calling .GetState
	I0729 17:34:27.757800   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.757818   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.758563   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.758768   96181 main.go:141] libmachine: (addons-145541) Calling .GetState
	I0729 17:34:27.759004   96181 main.go:141] libmachine: (addons-145541) Calling .DriverName
	I0729 17:34:27.759335   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:34:27.759794   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.759835   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.760107   96181 main.go:141] libmachine: (addons-145541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541: {Iface:virbr1 ExpiryTime:2024-07-29 18:33:45 +0000 UTC Type:0 Mac:52:54:00:25:f4:2d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:addons-145541 Clientid:01:52:54:00:25:f4:2d}
	I0729 17:34:27.760134   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:34:27.760595   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHPort
	I0729 17:34:27.760879   96181 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0729 17:34:27.760996   96181 main.go:141] libmachine: (addons-145541) Calling .DriverName
	I0729 17:34:27.761463   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHUsername
	I0729 17:34:27.761540   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHKeyPath
	I0729 17:34:27.761856   96181 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/addons-145541/id_rsa Username:docker}
	I0729 17:34:27.762100   96181 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0729 17:34:27.762156   96181 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 17:34:27.762164   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHUsername
	I0729 17:34:27.762167   96181 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 17:34:27.762186   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHHostname
	I0729 17:34:27.762417   96181 host.go:66] Checking if "addons-145541" exists ...
	I0729 17:34:27.762811   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.762866   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.762900   96181 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/addons-145541/id_rsa Username:docker}
	I0729 17:34:27.764103   96181 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0729 17:34:27.764120   96181 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0729 17:34:27.764146   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHHostname
	I0729 17:34:27.766607   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:34:27.767066   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34117
	I0729 17:34:27.767566   96181 main.go:141] libmachine: (addons-145541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541: {Iface:virbr1 ExpiryTime:2024-07-29 18:33:45 +0000 UTC Type:0 Mac:52:54:00:25:f4:2d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:addons-145541 Clientid:01:52:54:00:25:f4:2d}
	I0729 17:34:27.767615   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:34:27.767877   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHPort
	I0729 17:34:27.768118   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHKeyPath
	I0729 17:34:27.768355   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHUsername
	I0729 17:34:27.768597   96181 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/addons-145541/id_rsa Username:docker}
	I0729 17:34:27.769535   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:34:27.769958   96181 main.go:141] libmachine: (addons-145541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541: {Iface:virbr1 ExpiryTime:2024-07-29 18:33:45 +0000 UTC Type:0 Mac:52:54:00:25:f4:2d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:addons-145541 Clientid:01:52:54:00:25:f4:2d}
	I0729 17:34:27.769977   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:34:27.770231   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHPort
	I0729 17:34:27.770434   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHKeyPath
	I0729 17:34:27.770615   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHUsername
	I0729 17:34:27.770878   96181 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/addons-145541/id_rsa Username:docker}
	I0729 17:34:27.771504   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.772185   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.772202   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.772690   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.773042   96181 main.go:141] libmachine: (addons-145541) Calling .GetState
	I0729 17:34:27.774660   96181 main.go:141] libmachine: (addons-145541) Calling .DriverName
	I0729 17:34:27.776256   96181 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0729 17:34:27.777648   96181 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0729 17:34:27.777667   96181 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0729 17:34:27.777687   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHHostname
	I0729 17:34:27.779616   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38759
	I0729 17:34:27.780147   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.780716   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.780734   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.780794   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:34:27.781057   96181 main.go:141] libmachine: (addons-145541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541: {Iface:virbr1 ExpiryTime:2024-07-29 18:33:45 +0000 UTC Type:0 Mac:52:54:00:25:f4:2d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:addons-145541 Clientid:01:52:54:00:25:f4:2d}
	I0729 17:34:27.781075   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:34:27.781119   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.781303   96181 main.go:141] libmachine: (addons-145541) Calling .GetState
	I0729 17:34:27.781394   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHPort
	I0729 17:34:27.781586   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHKeyPath
	I0729 17:34:27.781765   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHUsername
	I0729 17:34:27.781939   96181 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/addons-145541/id_rsa Username:docker}
	I0729 17:34:27.784009   96181 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-145541"
	I0729 17:34:27.784067   96181 host.go:66] Checking if "addons-145541" exists ...
	I0729 17:34:27.784443   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.784552   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.784773   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41181
	I0729 17:34:27.784980   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38371
	I0729 17:34:27.785140   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35489
	I0729 17:34:27.785651   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.785775   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.785852   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.785925   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36375
	I0729 17:34:27.786220   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.786232   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.786324   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.786331   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.786413   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.786431   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.786676   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.786770   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.786946   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.787112   96181 main.go:141] libmachine: (addons-145541) Calling .GetState
	I0729 17:34:27.787153   96181 main.go:141] libmachine: (addons-145541) Calling .DriverName
	I0729 17:34:27.787990   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.788622   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.788666   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.789094   96181 main.go:141] libmachine: (addons-145541) Calling .DriverName
	I0729 17:34:27.789910   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.789935   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.790108   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33573
	I0729 17:34:27.790308   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.790521   96181 main.go:141] libmachine: (addons-145541) Calling .GetState
	I0729 17:34:27.790602   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.791034   96181 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0729 17:34:27.791131   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.791153   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.791673   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.791846   96181 main.go:141] libmachine: (addons-145541) Calling .GetState
	I0729 17:34:27.792244   96181 main.go:141] libmachine: (addons-145541) Calling .DriverName
	I0729 17:34:27.793482   96181 main.go:141] libmachine: (addons-145541) Calling .DriverName
	I0729 17:34:27.793826   96181 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.1
	I0729 17:34:27.793826   96181 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0729 17:34:27.794872   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45385
	I0729 17:34:27.795007   96181 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0729 17:34:27.795323   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.796000   96181 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0729 17:34:27.796021   96181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0729 17:34:27.796041   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHHostname
	I0729 17:34:27.796670   96181 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0729 17:34:27.796689   96181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0729 17:34:27.796707   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHHostname
	I0729 17:34:27.796207   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.796786   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.797214   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.797828   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.797870   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.798633   96181 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0729 17:34:27.799855   96181 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0729 17:34:27.800617   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:34:27.800827   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:34:27.801139   96181 main.go:141] libmachine: (addons-145541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541: {Iface:virbr1 ExpiryTime:2024-07-29 18:33:45 +0000 UTC Type:0 Mac:52:54:00:25:f4:2d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:addons-145541 Clientid:01:52:54:00:25:f4:2d}
	I0729 17:34:27.801167   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:34:27.801200   96181 main.go:141] libmachine: (addons-145541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541: {Iface:virbr1 ExpiryTime:2024-07-29 18:33:45 +0000 UTC Type:0 Mac:52:54:00:25:f4:2d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:addons-145541 Clientid:01:52:54:00:25:f4:2d}
	I0729 17:34:27.801211   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:34:27.801401   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHPort
	I0729 17:34:27.801401   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHPort
	I0729 17:34:27.801572   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHKeyPath
	I0729 17:34:27.801712   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHUsername
	I0729 17:34:27.801755   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHKeyPath
	I0729 17:34:27.801855   96181 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/addons-145541/id_rsa Username:docker}
	I0729 17:34:27.801920   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHUsername
	I0729 17:34:27.802084   96181 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/addons-145541/id_rsa Username:docker}
	I0729 17:34:27.803851   96181 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0729 17:34:27.805107   96181 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0729 17:34:27.806312   96181 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0729 17:34:27.807604   96181 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0729 17:34:27.808681   96181 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0729 17:34:27.808701   96181 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0729 17:34:27.808740   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHHostname
	I0729 17:34:27.809960   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44649
	I0729 17:34:27.809966   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42363
	I0729 17:34:27.810408   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.811036   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.811051   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.811454   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.811530   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.811614   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39503
	I0729 17:34:27.812579   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.812619   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.812930   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:34:27.812958   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.812980   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.813316   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.813440   96181 main.go:141] libmachine: (addons-145541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541: {Iface:virbr1 ExpiryTime:2024-07-29 18:33:45 +0000 UTC Type:0 Mac:52:54:00:25:f4:2d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:addons-145541 Clientid:01:52:54:00:25:f4:2d}
	I0729 17:34:27.813460   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:34:27.813510   96181 main.go:141] libmachine: (addons-145541) Calling .GetState
	I0729 17:34:27.813569   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.814638   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHPort
	I0729 17:34:27.814797   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.814812   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.814960   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHKeyPath
	I0729 17:34:27.815090   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHUsername
	I0729 17:34:27.815191   96181 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/addons-145541/id_rsa Username:docker}
	I0729 17:34:27.815487   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34617
	I0729 17:34:27.815595   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.815878   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.815969   96181 main.go:141] libmachine: (addons-145541) Calling .GetState
	I0729 17:34:27.816408   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.816426   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.816478   96181 main.go:141] libmachine: (addons-145541) Calling .DriverName
	I0729 17:34:27.817201   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.817236   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46533
	I0729 17:34:27.817527   96181 main.go:141] libmachine: (addons-145541) Calling .DriverName
	I0729 17:34:27.817609   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.818164   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.818190   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.818253   96181 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0729 17:34:27.818578   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.818659   96181 main.go:141] libmachine: (addons-145541) Calling .GetState
	I0729 17:34:27.818799   96181 main.go:141] libmachine: (addons-145541) Calling .GetState
	I0729 17:34:27.818957   96181 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0729 17:34:27.819834   96181 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0729 17:34:27.819854   96181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0729 17:34:27.819872   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHHostname
	I0729 17:34:27.820472   96181 main.go:141] libmachine: (addons-145541) Calling .DriverName
	I0729 17:34:27.820637   96181 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0729 17:34:27.820653   96181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0729 17:34:27.820670   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHHostname
	I0729 17:34:27.820877   96181 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 17:34:27.820890   96181 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 17:34:27.820907   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHHostname
	I0729 17:34:27.821070   96181 main.go:141] libmachine: (addons-145541) Calling .DriverName
	I0729 17:34:27.821149   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39939
	I0729 17:34:27.821629   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.822024   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.822047   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.822432   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.822632   96181 main.go:141] libmachine: (addons-145541) Calling .GetState
	I0729 17:34:27.824886   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:34:27.825056   96181 main.go:141] libmachine: (addons-145541) Calling .DriverName
	I0729 17:34:27.825345   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:34:27.825380   96181 main.go:141] libmachine: (addons-145541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541: {Iface:virbr1 ExpiryTime:2024-07-29 18:33:45 +0000 UTC Type:0 Mac:52:54:00:25:f4:2d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:addons-145541 Clientid:01:52:54:00:25:f4:2d}
	I0729 17:34:27.825395   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:34:27.825577   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHPort
	I0729 17:34:27.825742   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHKeyPath
	I0729 17:34:27.825806   96181 main.go:141] libmachine: (addons-145541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541: {Iface:virbr1 ExpiryTime:2024-07-29 18:33:45 +0000 UTC Type:0 Mac:52:54:00:25:f4:2d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:addons-145541 Clientid:01:52:54:00:25:f4:2d}
	I0729 17:34:27.825823   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:34:27.825853   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHUsername
	I0729 17:34:27.825962   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHPort
	I0729 17:34:27.826092   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHKeyPath
	I0729 17:34:27.826204   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHUsername
	I0729 17:34:27.826307   96181 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/addons-145541/id_rsa Username:docker}
	I0729 17:34:27.826566   96181 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/addons-145541/id_rsa Username:docker}
	I0729 17:34:27.826686   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:34:27.826719   96181 main.go:141] libmachine: (addons-145541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541: {Iface:virbr1 ExpiryTime:2024-07-29 18:33:45 +0000 UTC Type:0 Mac:52:54:00:25:f4:2d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:addons-145541 Clientid:01:52:54:00:25:f4:2d}
	I0729 17:34:27.826741   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:34:27.826880   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHPort
	I0729 17:34:27.827080   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHKeyPath
	I0729 17:34:27.827237   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHUsername
	I0729 17:34:27.827353   96181 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/addons-145541/id_rsa Username:docker}
	W0729 17:34:27.829579   96181 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:42572->192.168.39.242:22: read: connection reset by peer
	I0729 17:34:27.830266   96181 retry.go:31] will retry after 204.825198ms: ssh: handshake failed: read tcp 192.168.39.1:42572->192.168.39.242:22: read: connection reset by peer
	I0729 17:34:27.830304   96181 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0729 17:34:27.831799   96181 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0729 17:34:27.832765   96181 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0729 17:34:27.834131   96181 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0729 17:34:27.834138   96181 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0729 17:34:27.834194   96181 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0729 17:34:27.834226   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHHostname
	I0729 17:34:27.836119   96181 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0729 17:34:27.836141   96181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0729 17:34:27.836165   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHHostname
	I0729 17:34:27.837582   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:34:27.838171   96181 main.go:141] libmachine: (addons-145541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541: {Iface:virbr1 ExpiryTime:2024-07-29 18:33:45 +0000 UTC Type:0 Mac:52:54:00:25:f4:2d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:addons-145541 Clientid:01:52:54:00:25:f4:2d}
	I0729 17:34:27.838200   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:34:27.838316   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHPort
	I0729 17:34:27.838504   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHKeyPath
	I0729 17:34:27.838653   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHUsername
	I0729 17:34:27.838790   96181 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/addons-145541/id_rsa Username:docker}
	I0729 17:34:27.839165   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:34:27.839530   96181 main.go:141] libmachine: (addons-145541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541: {Iface:virbr1 ExpiryTime:2024-07-29 18:33:45 +0000 UTC Type:0 Mac:52:54:00:25:f4:2d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:addons-145541 Clientid:01:52:54:00:25:f4:2d}
	I0729 17:34:27.839557   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:34:27.839695   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHPort
	I0729 17:34:27.839877   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHKeyPath
	I0729 17:34:27.840029   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHUsername
	I0729 17:34:27.840167   96181 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/addons-145541/id_rsa Username:docker}
	I0729 17:34:27.846963   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43563
	I0729 17:34:27.847469   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.848038   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.848057   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.848425   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.848609   96181 main.go:141] libmachine: (addons-145541) Calling .GetState
	I0729 17:34:27.850321   96181 main.go:141] libmachine: (addons-145541) Calling .DriverName
	I0729 17:34:27.852299   96181 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0729 17:34:27.853720   96181 out.go:177]   - Using image docker.io/busybox:stable
	I0729 17:34:27.855159   96181 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0729 17:34:27.855174   96181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0729 17:34:27.855187   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHHostname
	I0729 17:34:27.857747   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:34:27.858071   96181 main.go:141] libmachine: (addons-145541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541: {Iface:virbr1 ExpiryTime:2024-07-29 18:33:45 +0000 UTC Type:0 Mac:52:54:00:25:f4:2d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:addons-145541 Clientid:01:52:54:00:25:f4:2d}
	I0729 17:34:27.858105   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:34:27.858274   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHPort
	I0729 17:34:27.858432   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHKeyPath
	I0729 17:34:27.858566   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHUsername
	I0729 17:34:27.858692   96181 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/addons-145541/id_rsa Username:docker}
	W0729 17:34:28.037417   96181 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:42612->192.168.39.242:22: read: connection reset by peer
	I0729 17:34:28.037452   96181 retry.go:31] will retry after 403.151739ms: ssh: handshake failed: read tcp 192.168.39.1:42612->192.168.39.242:22: read: connection reset by peer
	I0729 17:34:28.141384   96181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0729 17:34:28.141395   96181 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 17:34:28.183015   96181 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0729 17:34:28.202367   96181 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0729 17:34:28.202393   96181 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0729 17:34:28.203450   96181 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0729 17:34:28.203468   96181 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0729 17:34:28.277314   96181 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 17:34:28.289222   96181 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0729 17:34:28.299650   96181 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0729 17:34:28.299672   96181 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0729 17:34:28.310544   96181 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 17:34:28.315309   96181 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0729 17:34:28.341325   96181 node_ready.go:35] waiting up to 6m0s for node "addons-145541" to be "Ready" ...
	I0729 17:34:28.344259   96181 node_ready.go:49] node "addons-145541" has status "Ready":"True"
	I0729 17:34:28.344279   96181 node_ready.go:38] duration metric: took 2.929261ms for node "addons-145541" to be "Ready" ...
	I0729 17:34:28.344286   96181 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 17:34:28.350504   96181 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-dfrfm" in "kube-system" namespace to be "Ready" ...
	I0729 17:34:28.381747   96181 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0729 17:34:28.381770   96181 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0729 17:34:28.414157   96181 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0729 17:34:28.414180   96181 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0729 17:34:28.427125   96181 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0729 17:34:28.427148   96181 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0729 17:34:28.431969   96181 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0729 17:34:28.431991   96181 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0729 17:34:28.433988   96181 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0729 17:34:28.435218   96181 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 17:34:28.435232   96181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0729 17:34:28.440133   96181 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0729 17:34:28.440156   96181 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0729 17:34:28.511422   96181 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0729 17:34:28.511450   96181 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0729 17:34:28.586992   96181 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0729 17:34:28.587023   96181 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0729 17:34:28.677952   96181 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0729 17:34:28.677974   96181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0729 17:34:28.688718   96181 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0729 17:34:28.688740   96181 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0729 17:34:28.735118   96181 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0729 17:34:28.735144   96181 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0729 17:34:28.738218   96181 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 17:34:28.738238   96181 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 17:34:28.740523   96181 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0729 17:34:28.740546   96181 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0729 17:34:28.757326   96181 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0729 17:34:28.757355   96181 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0729 17:34:28.806163   96181 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0729 17:34:28.806190   96181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0729 17:34:28.872016   96181 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0729 17:34:28.898521   96181 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0729 17:34:28.898545   96181 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0729 17:34:28.910448   96181 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 17:34:28.910468   96181 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 17:34:28.948434   96181 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0729 17:34:28.948456   96181 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0729 17:34:28.967528   96181 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0729 17:34:28.967552   96181 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0729 17:34:28.968754   96181 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0729 17:34:29.017422   96181 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0729 17:34:29.027884   96181 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0729 17:34:29.138010   96181 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 17:34:29.146515   96181 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0729 17:34:29.146547   96181 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0729 17:34:29.149661   96181 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0729 17:34:29.149679   96181 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0729 17:34:29.210077   96181 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0729 17:34:29.210102   96181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0729 17:34:29.283889   96181 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0729 17:34:29.283914   96181 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0729 17:34:29.415870   96181 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0729 17:34:29.415898   96181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0729 17:34:29.467628   96181 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0729 17:34:29.568459   96181 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0729 17:34:29.568495   96181 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0729 17:34:29.760001   96181 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0729 17:34:29.760027   96181 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0729 17:34:29.846135   96181 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0729 17:34:29.846166   96181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0729 17:34:29.970844   96181 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0729 17:34:29.970875   96181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0729 17:34:30.127847   96181 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0729 17:34:30.215724   96181 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0729 17:34:30.215749   96181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0729 17:34:30.356558   96181 pod_ready.go:102] pod "coredns-7db6d8ff4d-dfrfm" in "kube-system" namespace has status "Ready":"False"
	I0729 17:34:30.497860   96181 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0729 17:34:30.497891   96181 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0729 17:34:30.664938   96181 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.523517069s)
	I0729 17:34:30.664973   96181 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0729 17:34:30.735058   96181 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0729 17:34:31.183627   96181 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-145541" context rescaled to 1 replicas
	I0729 17:34:32.499236   96181 pod_ready.go:102] pod "coredns-7db6d8ff4d-dfrfm" in "kube-system" namespace has status "Ready":"False"
	I0729 17:34:33.888682   96181 pod_ready.go:92] pod "coredns-7db6d8ff4d-dfrfm" in "kube-system" namespace has status "Ready":"True"
	I0729 17:34:33.888706   96181 pod_ready.go:81] duration metric: took 5.538178544s for pod "coredns-7db6d8ff4d-dfrfm" in "kube-system" namespace to be "Ready" ...
	I0729 17:34:33.888719   96181 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-sn87l" in "kube-system" namespace to be "Ready" ...
	I0729 17:34:34.794666   96181 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0729 17:34:34.794708   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHHostname
	I0729 17:34:34.797613   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:34:34.798086   96181 main.go:141] libmachine: (addons-145541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541: {Iface:virbr1 ExpiryTime:2024-07-29 18:33:45 +0000 UTC Type:0 Mac:52:54:00:25:f4:2d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:addons-145541 Clientid:01:52:54:00:25:f4:2d}
	I0729 17:34:34.798116   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:34:34.798299   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHPort
	I0729 17:34:34.798536   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHKeyPath
	I0729 17:34:34.798722   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHUsername
	I0729 17:34:34.798859   96181 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/addons-145541/id_rsa Username:docker}
	I0729 17:34:35.208998   96181 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0729 17:34:35.366046   96181 addons.go:234] Setting addon gcp-auth=true in "addons-145541"
	I0729 17:34:35.366107   96181 host.go:66] Checking if "addons-145541" exists ...
	I0729 17:34:35.366406   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:35.366435   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:35.381671   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43919
	I0729 17:34:35.382145   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:35.382654   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:35.382677   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:35.383014   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:35.383471   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:35.383502   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:35.398272   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42119
	I0729 17:34:35.398719   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:35.399238   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:35.399268   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:35.399573   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:35.399757   96181 main.go:141] libmachine: (addons-145541) Calling .GetState
	I0729 17:34:35.401244   96181 main.go:141] libmachine: (addons-145541) Calling .DriverName
	I0729 17:34:35.401473   96181 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0729 17:34:35.401494   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHHostname
	I0729 17:34:35.404600   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:34:35.405082   96181 main.go:141] libmachine: (addons-145541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541: {Iface:virbr1 ExpiryTime:2024-07-29 18:33:45 +0000 UTC Type:0 Mac:52:54:00:25:f4:2d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:addons-145541 Clientid:01:52:54:00:25:f4:2d}
	I0729 17:34:35.405114   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:34:35.405289   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHPort
	I0729 17:34:35.405446   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHKeyPath
	I0729 17:34:35.405597   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHUsername
	I0729 17:34:35.405702   96181 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/addons-145541/id_rsa Username:docker}
	I0729 17:34:35.903712   96181 pod_ready.go:102] pod "coredns-7db6d8ff4d-sn87l" in "kube-system" namespace has status "Ready":"False"
	I0729 17:34:36.628397   96181 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.351042747s)
	I0729 17:34:36.628454   96181 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.339203241s)
	I0729 17:34:36.628495   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:36.628513   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:36.628541   96181 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.313213577s)
	I0729 17:34:36.628499   96181 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.317924577s)
	I0729 17:34:36.628579   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:36.628592   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:36.628595   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:36.628603   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:36.628621   96181 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.194611778s)
	I0729 17:34:36.628463   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:36.628649   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:36.628651   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:36.628711   96181 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.756664091s)
	I0729 17:34:36.628749   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:36.628764   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:36.628792   96181 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.660013942s)
	I0729 17:34:36.628812   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:36.628822   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:36.628870   96181 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.611408784s)
	I0729 17:34:36.628893   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:36.628902   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:36.628905   96181 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.600998026s)
	I0729 17:34:36.628921   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:36.628932   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:36.629003   96181 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.490964528s)
	I0729 17:34:36.629021   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:36.629030   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:36.629160   96181 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.161498941s)
	W0729 17:34:36.629191   96181 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0729 17:34:36.629229   96181 retry.go:31] will retry after 245.713684ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0729 17:34:36.629318   96181 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.501441932s)
	I0729 17:34:36.629339   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:36.629349   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:36.631025   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:36.632743   96181 main.go:141] libmachine: (addons-145541) DBG | Closing plugin on server side
	I0729 17:34:36.632745   96181 main.go:141] libmachine: (addons-145541) DBG | Closing plugin on server side
	I0729 17:34:36.632748   96181 main.go:141] libmachine: (addons-145541) DBG | Closing plugin on server side
	I0729 17:34:36.632762   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:36.632777   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:36.632782   96181 main.go:141] libmachine: (addons-145541) DBG | Closing plugin on server side
	I0729 17:34:36.632781   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:36.632787   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:34:36.632795   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:34:36.632798   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:36.632800   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:36.632801   96181 main.go:141] libmachine: (addons-145541) DBG | Closing plugin on server side
	I0729 17:34:36.632810   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:34:36.632820   96181 main.go:141] libmachine: (addons-145541) DBG | Closing plugin on server side
	I0729 17:34:36.632822   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:36.632832   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:36.632787   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:34:36.632840   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:34:36.632833   96181 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.44977425s)
	I0729 17:34:36.632848   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:36.632872   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:36.632879   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:36.632884   96181 main.go:141] libmachine: (addons-145541) DBG | Closing plugin on server side
	I0729 17:34:36.632807   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:36.632901   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:36.632803   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:36.632885   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:36.632931   96181 main.go:141] libmachine: (addons-145541) DBG | Closing plugin on server side
	I0729 17:34:36.632935   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:34:36.632939   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:36.632944   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:36.632950   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:36.632844   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:36.632960   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:34:36.632961   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:36.632970   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:36.632972   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:34:36.632977   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:36.632980   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:36.632985   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:36.632987   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:36.632993   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:34:36.632841   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:36.632834   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:36.633012   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:34:36.633021   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:36.633028   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:36.633032   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:36.633037   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:36.633039   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:34:36.633044   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:34:36.633052   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:36.632961   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:36.633060   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:36.632951   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:36.633195   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:36.633205   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:34:36.633371   96181 main.go:141] libmachine: (addons-145541) DBG | Closing plugin on server side
	I0729 17:34:36.633405   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:36.633412   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:34:36.633437   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:36.633471   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:36.633525   96181 main.go:141] libmachine: (addons-145541) DBG | Closing plugin on server side
	I0729 17:34:36.633550   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:36.633556   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:34:36.633618   96181 main.go:141] libmachine: (addons-145541) DBG | Closing plugin on server side
	I0729 17:34:36.633661   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:36.633669   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:34:36.633678   96181 addons.go:475] Verifying addon metrics-server=true in "addons-145541"
	I0729 17:34:36.633004   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:36.633714   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:36.633771   96181 main.go:141] libmachine: (addons-145541) DBG | Closing plugin on server side
	I0729 17:34:36.633792   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:36.633798   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:34:36.633805   96181 addons.go:475] Verifying addon ingress=true in "addons-145541"
	I0729 17:34:36.633974   96181 main.go:141] libmachine: (addons-145541) DBG | Closing plugin on server side
	I0729 17:34:36.633999   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:36.634009   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:34:36.634020   96181 main.go:141] libmachine: (addons-145541) DBG | Closing plugin on server side
	I0729 17:34:36.634041   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:36.634047   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:34:36.634000   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:36.634056   96181 main.go:141] libmachine: (addons-145541) DBG | Closing plugin on server side
	I0729 17:34:36.634067   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:34:36.634076   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:36.634083   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:34:36.634894   96181 out.go:177] * Verifying ingress addon...
	I0729 17:34:36.634935   96181 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-145541 service yakd-dashboard -n yakd-dashboard
	
	I0729 17:34:36.635404   96181 main.go:141] libmachine: (addons-145541) DBG | Closing plugin on server side
	I0729 17:34:36.635445   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:36.635455   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:34:36.637423   96181 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0729 17:34:36.638717   96181 main.go:141] libmachine: (addons-145541) DBG | Closing plugin on server side
	I0729 17:34:36.638727   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:36.638743   96181 main.go:141] libmachine: (addons-145541) DBG | Closing plugin on server side
	I0729 17:34:36.638740   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:34:36.638772   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:36.638785   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:34:36.638800   96181 addons.go:475] Verifying addon registry=true in "addons-145541"
	I0729 17:34:36.640270   96181 out.go:177] * Verifying registry addon...
	I0729 17:34:36.642131   96181 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0729 17:34:36.668697   96181 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0729 17:34:36.668720   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:36.668942   96181 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0729 17:34:36.668970   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:36.678386   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:36.678402   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:36.678686   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:36.678704   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	W0729 17:34:36.678809   96181 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0729 17:34:36.679325   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:36.679346   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:36.679564   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:36.679582   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:34:36.679613   96181 main.go:141] libmachine: (addons-145541) DBG | Closing plugin on server side
	I0729 17:34:36.875760   96181 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0729 17:34:37.163475   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:37.184096   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:37.656128   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:37.663763   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:37.751958   96181 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.350452991s)
	I0729 17:34:37.753575   96181 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0729 17:34:37.754524   96181 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.01941461s)
	I0729 17:34:37.754566   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:37.754575   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:37.754797   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:37.754811   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:34:37.754824   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:37.754833   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:37.755249   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:37.755278   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:34:37.755296   96181 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-145541"
	I0729 17:34:37.756975   96181 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0729 17:34:37.756987   96181 out.go:177] * Verifying csi-hostpath-driver addon...
	I0729 17:34:37.758979   96181 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0729 17:34:37.759000   96181 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0729 17:34:37.759769   96181 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0729 17:34:37.784163   96181 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0729 17:34:37.784184   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:37.860777   96181 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0729 17:34:37.860807   96181 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0729 17:34:37.963931   96181 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0729 17:34:37.963957   96181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0729 17:34:38.080753   96181 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0729 17:34:38.151199   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:38.154588   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:38.274772   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:38.404938   96181 pod_ready.go:102] pod "coredns-7db6d8ff4d-sn87l" in "kube-system" namespace has status "Ready":"False"
	I0729 17:34:38.645325   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:38.657233   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:38.768450   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:39.037267   96181 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.161456531s)
	I0729 17:34:39.037318   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:39.037330   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:39.037699   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:39.037725   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:34:39.037737   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:39.037746   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:39.038038   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:39.038073   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:34:39.038118   96181 main.go:141] libmachine: (addons-145541) DBG | Closing plugin on server side
	I0729 17:34:39.141834   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:39.146261   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:39.287885   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:39.667052   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:39.667092   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:39.670555   96181 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.589766398s)
	I0729 17:34:39.670599   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:39.670616   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:39.670902   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:39.670917   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:34:39.670925   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:39.670933   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:39.671153   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:39.671172   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:34:39.671197   96181 main.go:141] libmachine: (addons-145541) DBG | Closing plugin on server side
	I0729 17:34:39.672658   96181 addons.go:475] Verifying addon gcp-auth=true in "addons-145541"
	I0729 17:34:39.674354   96181 out.go:177] * Verifying gcp-auth addon...
	I0729 17:34:39.676684   96181 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0729 17:34:39.690783   96181 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0729 17:34:39.690802   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:39.765908   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:40.141745   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:40.150619   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:40.182626   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:40.265732   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:40.395430   96181 pod_ready.go:97] pod "coredns-7db6d8ff4d-sn87l" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 17:34:39 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 17:34:28 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 17:34:28 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 17:34:28 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 17:34:28 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.242 HostIPs:[{IP:192.168.39.242}] PodIP: PodIPs:[] StartTime:2024-07-29 17:34:28 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-07-29 17:34:31 +0000 UTC,FinishedAt:2024-07-29 17:34:37 +0000 UTC,ContainerID:cri-o://6df697aa3396b53386356e783a4377c6f6409f3fb0bccf2f93c6fa4920cbc2f6,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://6df697aa3396b53386356e783a4377c6f6409f3fb0bccf2f93c6fa4920cbc2f6 Started:0xc002285200 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0729 17:34:40.395465   96181 pod_ready.go:81] duration metric: took 6.506738286s for pod "coredns-7db6d8ff4d-sn87l" in "kube-system" namespace to be "Ready" ...
	E0729 17:34:40.395478   96181 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-7db6d8ff4d-sn87l" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 17:34:39 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 17:34:28 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 17:34:28 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 17:34:28 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 17:34:28 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.242 HostIPs:[{IP:192.168.39.242}] PodIP: PodIPs:[] StartTime:2024-07-29 17:34:28 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-07-29 17:34:31 +0000 UTC,FinishedAt:2024-07-29 17:34:37 +0000 UTC,ContainerID:cri-o://6df697aa3396b53386356e783a4377c6f6409f3fb0bccf2f93c6fa4920cbc2f6,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://6df697aa3396b53386356e783a4377c6f6409f3fb0bccf2f93c6fa4920cbc2f6 Started:0xc002285200 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0729 17:34:40.395487   96181 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-145541" in "kube-system" namespace to be "Ready" ...
	I0729 17:34:40.402702   96181 pod_ready.go:92] pod "etcd-addons-145541" in "kube-system" namespace has status "Ready":"True"
	I0729 17:34:40.402723   96181 pod_ready.go:81] duration metric: took 7.226843ms for pod "etcd-addons-145541" in "kube-system" namespace to be "Ready" ...
	I0729 17:34:40.402734   96181 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-145541" in "kube-system" namespace to be "Ready" ...
	I0729 17:34:40.409378   96181 pod_ready.go:92] pod "kube-apiserver-addons-145541" in "kube-system" namespace has status "Ready":"True"
	I0729 17:34:40.409399   96181 pod_ready.go:81] duration metric: took 6.656909ms for pod "kube-apiserver-addons-145541" in "kube-system" namespace to be "Ready" ...
	I0729 17:34:40.409409   96181 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-145541" in "kube-system" namespace to be "Ready" ...
	I0729 17:34:40.417324   96181 pod_ready.go:92] pod "kube-controller-manager-addons-145541" in "kube-system" namespace has status "Ready":"True"
	I0729 17:34:40.417342   96181 pod_ready.go:81] duration metric: took 7.925291ms for pod "kube-controller-manager-addons-145541" in "kube-system" namespace to be "Ready" ...
	I0729 17:34:40.417352   96181 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-v6sd2" in "kube-system" namespace to be "Ready" ...
	I0729 17:34:40.424380   96181 pod_ready.go:92] pod "kube-proxy-v6sd2" in "kube-system" namespace has status "Ready":"True"
	I0729 17:34:40.424399   96181 pod_ready.go:81] duration metric: took 7.039978ms for pod "kube-proxy-v6sd2" in "kube-system" namespace to be "Ready" ...
	I0729 17:34:40.424409   96181 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-145541" in "kube-system" namespace to be "Ready" ...
	I0729 17:34:40.642506   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:40.647058   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:40.680715   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:40.767396   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:40.793560   96181 pod_ready.go:92] pod "kube-scheduler-addons-145541" in "kube-system" namespace has status "Ready":"True"
	I0729 17:34:40.793587   96181 pod_ready.go:81] duration metric: took 369.170033ms for pod "kube-scheduler-addons-145541" in "kube-system" namespace to be "Ready" ...
	I0729 17:34:40.793598   96181 pod_ready.go:38] duration metric: took 12.449299757s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 17:34:40.793617   96181 api_server.go:52] waiting for apiserver process to appear ...
	I0729 17:34:40.793683   96181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 17:34:40.860217   96181 api_server.go:72] duration metric: took 13.19682114s to wait for apiserver process to appear ...
	I0729 17:34:40.860252   96181 api_server.go:88] waiting for apiserver healthz status ...
	I0729 17:34:40.860280   96181 api_server.go:253] Checking apiserver healthz at https://192.168.39.242:8443/healthz ...
	I0729 17:34:40.866260   96181 api_server.go:279] https://192.168.39.242:8443/healthz returned 200:
	ok
	I0729 17:34:40.868136   96181 api_server.go:141] control plane version: v1.30.3
	I0729 17:34:40.868163   96181 api_server.go:131] duration metric: took 7.902752ms to wait for apiserver health ...
	I0729 17:34:40.868171   96181 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 17:34:40.999623   96181 system_pods.go:59] 19 kube-system pods found
	I0729 17:34:40.999668   96181 system_pods.go:61] "coredns-7db6d8ff4d-dfrfm" [8f7f3dfc-f445-447d-8b0f-f9768984eff7] Running
	I0729 17:34:40.999678   96181 system_pods.go:61] "coredns-7db6d8ff4d-sn87l" [d46ccfce-d103-42de-a6ae-00bf710b59a3] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
	I0729 17:34:40.999686   96181 system_pods.go:61] "csi-hostpath-attacher-0" [16a60d4b-4133-4f9e-ae7d-b4abafb1c2e6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0729 17:34:40.999692   96181 system_pods.go:61] "csi-hostpath-resizer-0" [fe8391ab-3ece-485a-812b-3821cd2dbbcc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0729 17:34:40.999700   96181 system_pods.go:61] "csi-hostpathplugin-p9qp9" [2c479653-5761-44ea-8d45-514170d3db15] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0729 17:34:40.999704   96181 system_pods.go:61] "etcd-addons-145541" [099a502a-2c8f-42c2-87dc-361eae8baa07] Running
	I0729 17:34:40.999709   96181 system_pods.go:61] "kube-apiserver-addons-145541" [04ca4891-47d7-45eb-a209-60d485c67801] Running
	I0729 17:34:40.999714   96181 system_pods.go:61] "kube-controller-manager-addons-145541" [be6f595a-7b71-4995-a979-12490e8d99d4] Running
	I0729 17:34:40.999723   96181 system_pods.go:61] "kube-ingress-dns-minikube" [dc7be156-e078-4c48-931f-5daba154a3f7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0729 17:34:40.999727   96181 system_pods.go:61] "kube-proxy-v6sd2" [4a80c5a1-59ca-4e68-b237-5e7e03f8c23e] Running
	I0729 17:34:40.999732   96181 system_pods.go:61] "kube-scheduler-addons-145541" [31414739-297a-4811-9da1-c9a50a3ac824] Running
	I0729 17:34:40.999741   96181 system_pods.go:61] "metrics-server-c59844bb4-twcpr" [729a1011-260e-49bc-9fe9-0f5a13a4f5d7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 17:34:40.999750   96181 system_pods.go:61] "nvidia-device-plugin-daemonset-4gjrg" [3288c0c8-9742-44dc-985f-33455a462b79] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0729 17:34:40.999763   96181 system_pods.go:61] "registry-698f998955-9qnhg" [ca8784f3-5a3c-4e49-b99f-0f6a32e7c737] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0729 17:34:40.999771   96181 system_pods.go:61] "registry-proxy-dgtch" [621f0921-7ec4-4046-b693-3dd1b6619b44] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0729 17:34:40.999784   96181 system_pods.go:61] "snapshot-controller-745499f584-hghjr" [3753f85c-83f0-4f02-962f-8bcd30183cc2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0729 17:34:40.999790   96181 system_pods.go:61] "snapshot-controller-745499f584-r6j6p" [f4dd2b5d-ada4-4612-a5bc-63c97bc31200] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0729 17:34:40.999795   96181 system_pods.go:61] "storage-provisioner" [5cd58c8b-201b-433a-917f-1382e5a8fa0a] Running
	I0729 17:34:40.999800   96181 system_pods.go:61] "tiller-deploy-6677d64bcd-d7vqp" [01075b35-8252-425f-8fc5-05b87bfaccdb] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0729 17:34:40.999806   96181 system_pods.go:74] duration metric: took 131.629741ms to wait for pod list to return data ...
	I0729 17:34:40.999815   96181 default_sa.go:34] waiting for default service account to be created ...
	I0729 17:34:41.142635   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:41.146329   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:41.180187   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:41.191916   96181 default_sa.go:45] found service account: "default"
	I0729 17:34:41.191940   96181 default_sa.go:55] duration metric: took 192.11702ms for default service account to be created ...
	I0729 17:34:41.191951   96181 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 17:34:41.271129   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:41.400020   96181 system_pods.go:86] 19 kube-system pods found
	I0729 17:34:41.400062   96181 system_pods.go:89] "coredns-7db6d8ff4d-dfrfm" [8f7f3dfc-f445-447d-8b0f-f9768984eff7] Running
	I0729 17:34:41.400075   96181 system_pods.go:89] "coredns-7db6d8ff4d-sn87l" [d46ccfce-d103-42de-a6ae-00bf710b59a3] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
	I0729 17:34:41.400086   96181 system_pods.go:89] "csi-hostpath-attacher-0" [16a60d4b-4133-4f9e-ae7d-b4abafb1c2e6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0729 17:34:41.400095   96181 system_pods.go:89] "csi-hostpath-resizer-0" [fe8391ab-3ece-485a-812b-3821cd2dbbcc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0729 17:34:41.400112   96181 system_pods.go:89] "csi-hostpathplugin-p9qp9" [2c479653-5761-44ea-8d45-514170d3db15] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0729 17:34:41.400123   96181 system_pods.go:89] "etcd-addons-145541" [099a502a-2c8f-42c2-87dc-361eae8baa07] Running
	I0729 17:34:41.400133   96181 system_pods.go:89] "kube-apiserver-addons-145541" [04ca4891-47d7-45eb-a209-60d485c67801] Running
	I0729 17:34:41.400143   96181 system_pods.go:89] "kube-controller-manager-addons-145541" [be6f595a-7b71-4995-a979-12490e8d99d4] Running
	I0729 17:34:41.400155   96181 system_pods.go:89] "kube-ingress-dns-minikube" [dc7be156-e078-4c48-931f-5daba154a3f7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0729 17:34:41.400164   96181 system_pods.go:89] "kube-proxy-v6sd2" [4a80c5a1-59ca-4e68-b237-5e7e03f8c23e] Running
	I0729 17:34:41.400174   96181 system_pods.go:89] "kube-scheduler-addons-145541" [31414739-297a-4811-9da1-c9a50a3ac824] Running
	I0729 17:34:41.400186   96181 system_pods.go:89] "metrics-server-c59844bb4-twcpr" [729a1011-260e-49bc-9fe9-0f5a13a4f5d7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 17:34:41.400200   96181 system_pods.go:89] "nvidia-device-plugin-daemonset-4gjrg" [3288c0c8-9742-44dc-985f-33455a462b79] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0729 17:34:41.400212   96181 system_pods.go:89] "registry-698f998955-9qnhg" [ca8784f3-5a3c-4e49-b99f-0f6a32e7c737] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0729 17:34:41.400225   96181 system_pods.go:89] "registry-proxy-dgtch" [621f0921-7ec4-4046-b693-3dd1b6619b44] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0729 17:34:41.400237   96181 system_pods.go:89] "snapshot-controller-745499f584-hghjr" [3753f85c-83f0-4f02-962f-8bcd30183cc2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0729 17:34:41.400252   96181 system_pods.go:89] "snapshot-controller-745499f584-r6j6p" [f4dd2b5d-ada4-4612-a5bc-63c97bc31200] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0729 17:34:41.400261   96181 system_pods.go:89] "storage-provisioner" [5cd58c8b-201b-433a-917f-1382e5a8fa0a] Running
	I0729 17:34:41.400275   96181 system_pods.go:89] "tiller-deploy-6677d64bcd-d7vqp" [01075b35-8252-425f-8fc5-05b87bfaccdb] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0729 17:34:41.400287   96181 system_pods.go:126] duration metric: took 208.329309ms to wait for k8s-apps to be running ...
	I0729 17:34:41.400301   96181 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 17:34:41.400360   96181 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:34:41.439928   96181 system_svc.go:56] duration metric: took 39.616511ms WaitForService to wait for kubelet
	I0729 17:34:41.439963   96181 kubeadm.go:582] duration metric: took 13.776574462s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 17:34:41.439988   96181 node_conditions.go:102] verifying NodePressure condition ...
	I0729 17:34:41.592232   96181 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 17:34:41.592258   96181 node_conditions.go:123] node cpu capacity is 2
	I0729 17:34:41.592283   96181 node_conditions.go:105] duration metric: took 152.288045ms to run NodePressure ...
	I0729 17:34:41.592295   96181 start.go:241] waiting for startup goroutines ...
	I0729 17:34:41.642322   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:41.645934   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:41.681116   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:41.765411   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:42.142042   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:42.146201   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:42.179730   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:42.266636   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:42.643003   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:42.646635   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:42.682076   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:42.766376   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:43.142828   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:43.146614   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:43.180012   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:43.266046   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:43.895194   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:43.900493   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:43.900902   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:43.901028   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:44.142049   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:44.145984   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:44.180529   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:44.265659   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:44.641923   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:44.645856   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:44.680141   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:44.765331   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:45.142535   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:45.146534   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:45.180404   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:45.265739   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:45.643531   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:45.647141   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:45.679501   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:45.765897   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:46.142419   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:46.145879   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:46.180597   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:46.269383   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:46.642821   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:46.646409   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:46.679978   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:46.765922   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:47.142590   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:47.146035   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:47.181523   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:47.265814   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:47.642041   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:47.645866   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:47.680154   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:47.766057   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:48.141874   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:48.146091   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:48.179403   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:48.264883   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:48.642596   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:48.646002   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:48.680429   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:48.765396   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:49.141941   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:49.146080   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:49.180795   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:49.265919   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:49.642505   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:49.646207   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:49.680047   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:49.766424   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:50.141918   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:50.146888   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:50.180319   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:50.265357   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:50.644960   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:50.647799   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:50.680721   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:50.767039   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:51.142107   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:51.146280   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:51.180408   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:51.266235   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:51.641530   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:51.647993   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:51.680683   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:51.766915   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:52.142562   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:52.146426   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:52.180283   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:52.266400   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:52.642206   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:52.645623   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:52.680974   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:52.765634   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:53.143654   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:53.146889   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:53.180360   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:53.266094   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:53.641985   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:53.645989   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:53.681269   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:53.766237   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:54.142398   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:54.145980   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:54.181251   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:54.265538   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:54.641958   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:54.646251   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:54.680092   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:54.766712   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:55.142666   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:55.146684   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:55.180356   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:55.269281   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:55.642303   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:55.645870   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:55.680805   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:55.765997   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:56.141536   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:56.146675   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:56.180682   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:56.267076   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:56.641901   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:56.645791   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:56.680533   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:56.765344   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:57.149630   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:57.156147   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:57.185555   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:57.265917   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:57.651568   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:57.651793   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:57.680335   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:57.765655   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:58.141765   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:58.146595   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:58.182744   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:58.266033   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:58.642303   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:58.646295   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:58.680606   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:58.765297   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:59.149389   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:59.158949   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:59.180376   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:59.266788   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:59.642723   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:59.646529   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:59.679841   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:59.765415   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:00.344078   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:00.345906   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:35:00.346235   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:00.346560   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:00.642289   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:00.646284   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:35:00.680182   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:00.765192   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:01.149944   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:01.150244   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:35:01.182627   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:01.267727   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:01.641411   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:01.650940   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:35:01.684641   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:01.765121   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:02.141568   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:02.146662   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:35:02.180968   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:02.265413   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:02.642449   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:02.646970   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:35:02.681136   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:02.765439   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:03.453597   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:03.468694   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:35:03.468821   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:03.469331   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:03.642132   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:03.645967   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:35:03.680947   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:03.765762   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:04.142496   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:04.145835   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:35:04.180627   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:04.266328   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:04.642584   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:04.647185   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:35:04.679612   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:04.766097   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:05.142491   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:05.145837   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:35:05.180643   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:05.265618   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:05.643236   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:05.647598   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:35:05.680905   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:05.767771   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:06.142622   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:06.146661   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:35:06.180035   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:06.267945   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:06.643511   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:06.645819   96181 kapi.go:107] duration metric: took 30.003687026s to wait for kubernetes.io/minikube-addons=registry ...
	I0729 17:35:06.680238   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:06.764611   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:07.142674   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:07.180135   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:07.265298   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:07.641659   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:07.680304   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:07.765826   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:08.141640   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:08.179892   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:08.266283   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:08.641673   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:08.680160   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:08.765394   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:09.142145   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:09.181264   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:09.265756   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:09.642904   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:09.680726   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:09.766968   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:10.142682   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:10.179795   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:10.265809   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:10.642808   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:10.679861   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:10.766280   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:11.145309   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:11.180503   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:11.265328   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:11.642148   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:11.681102   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:11.765401   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:12.145261   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:12.184415   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:12.265067   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:12.646269   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:12.680921   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:12.765545   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:13.142077   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:13.179947   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:13.266380   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:13.648815   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:13.680348   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:13.772316   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:14.141778   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:14.183434   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:14.267831   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:14.642492   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:14.681054   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:14.767405   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:15.142071   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:15.180397   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:15.265421   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:15.641250   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:15.681287   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:15.775225   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:16.142492   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:16.181790   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:16.267318   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:16.649522   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:16.680641   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:16.765647   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:17.143177   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:17.180243   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:17.267616   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:17.642549   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:17.681067   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:17.766047   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:18.141433   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:18.180782   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:18.265518   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:18.641952   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:18.680531   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:18.765334   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:19.141594   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:19.182782   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:19.271071   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:19.642145   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:19.681035   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:19.789295   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:20.141991   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:20.181018   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:20.265265   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:20.642228   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:20.681107   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:20.764842   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:21.142461   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:21.180830   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:21.265307   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:21.641431   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:21.679864   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:21.772918   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:22.142889   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:22.180107   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:22.264754   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:22.642473   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:22.680284   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:22.765684   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:23.141718   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:23.179821   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:23.266085   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:23.642736   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:23.680145   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:23.765225   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:24.142822   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:24.180052   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:24.264695   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:24.647554   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:24.681147   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:24.765062   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:25.141799   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:25.184376   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:25.412853   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:25.815023   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:25.816718   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:25.823066   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:26.146922   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:26.181724   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:26.272985   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:26.649030   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:26.681924   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:26.770149   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:27.145187   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:27.186222   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:27.271442   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:27.655970   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:27.699385   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:27.767076   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:28.142799   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:28.181233   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:28.265898   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:28.642260   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:28.680539   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:28.766100   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:29.144697   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:29.181013   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:29.266413   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:29.642698   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:29.681604   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:29.775178   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:30.143037   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:30.187310   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:30.269140   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:30.644265   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:30.680902   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:30.773928   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:31.141376   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:31.179576   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:31.265488   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:31.642794   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:31.682983   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:31.765572   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:32.213013   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:32.221393   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:32.265827   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:32.642235   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:32.680993   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:32.766247   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:33.143765   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:33.180482   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:33.265684   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:33.642411   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:33.679628   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:33.765310   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:34.142004   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:34.180646   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:34.265523   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:34.642099   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:34.680535   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:34.765206   96181 kapi.go:107] duration metric: took 57.005436432s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0729 17:35:35.141598   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:35.179973   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:35.642480   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:35.680619   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:36.142125   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:36.180337   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:36.642207   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:36.680947   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:37.142740   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:37.180710   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:37.642395   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:37.680302   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:38.141750   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:38.180260   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:38.642990   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:38.680173   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:39.142908   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:39.180605   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:39.641753   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:39.680444   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:40.142046   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:40.180390   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:40.641867   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:40.680204   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:41.142555   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:41.181054   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:41.641868   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:41.680777   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:42.141941   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:42.180415   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:42.644029   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:42.682083   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:43.142873   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:43.180201   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:43.642245   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:43.680750   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:44.145621   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:44.194008   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:44.698450   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:44.698978   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:45.141928   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:45.181304   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:45.641407   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:45.679984   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:46.142280   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:46.180354   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:46.642327   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:46.680775   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:47.142404   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:47.180148   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:47.642590   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:47.687135   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:48.144659   96181 kapi.go:107] duration metric: took 1m11.507233239s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0729 17:35:48.181780   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:48.680723   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:49.180312   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:49.680765   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:50.181716   96181 kapi.go:107] duration metric: took 1m10.505030968s to wait for kubernetes.io/minikube-addons=gcp-auth ...
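	
	The kapi.go:96/kapi.go:107 lines above record a simple poll loop: list pods by label selector, log "waiting" while none are Running, and emit a duration metric once one is. Below is a minimal, hedged sketch of that pattern using client-go; the helper name, timeout, poll interval, and the all-namespaces listing are illustrative assumptions, not minikube's actual implementation.
	
	    // waitforpods.go: minimal sketch of a label-selector wait loop, similar in
	    // spirit to the kapi.go "waiting for pod" messages above. Helper name,
	    // timeout, and poll interval are illustrative assumptions.
	    package main
	
	    import (
	    	"context"
	    	"fmt"
	    	"time"
	
	    	corev1 "k8s.io/api/core/v1"
	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/client-go/kubernetes"
	    	"k8s.io/client-go/tools/clientcmd"
	    )
	
	    // waitForLabel polls until at least one pod matching selector is Running,
	    // or the timeout expires.
	    func waitForLabel(ctx context.Context, cs kubernetes.Interface, selector string, timeout time.Duration) error {
	    	deadline := time.Now().Add(timeout)
	    	for time.Now().Before(deadline) {
	    		// Empty namespace lists pods across all namespaces.
	    		pods, err := cs.CoreV1().Pods("").List(ctx, metav1.ListOptions{LabelSelector: selector})
	    		if err == nil {
	    			for _, p := range pods.Items {
	    				if p.Status.Phase == corev1.PodRunning {
	    					return nil
	    				}
	    			}
	    		}
	    		fmt.Printf("waiting for pod %q, still pending\n", selector)
	    		time.Sleep(500 * time.Millisecond) // roughly the cadence visible in the log above
	    	}
	    	return fmt.Errorf("timed out waiting for %q", selector)
	    }
	
	    func main() {
	    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	    	if err != nil {
	    		panic(err)
	    	}
	    	cs := kubernetes.NewForConfigOrDie(cfg)
	    	if err := waitForLabel(context.Background(), cs, "kubernetes.io/minikube-addons=gcp-auth", 6*time.Minute); err != nil {
	    		panic(err)
	    	}
	    }
	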
	I0729 17:35:50.183302   96181 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-145541 cluster.
	I0729 17:35:50.184505   96181 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0729 17:35:50.185759   96181 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
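	
	The message above says pods can opt out of credential mounting by carrying a label with the `gcp-auth-skip-secret` key. As a minimal sketch, the snippet below builds such a pod spec with client-go types and prints it as YAML; only the label key comes from the log message, while the pod name, container, image, and label value are illustrative placeholders.
	
	    // skipsecret.go: minimal sketch of a pod definition carrying the
	    // gcp-auth-skip-secret label mentioned in the gcp-auth message above.
	    // Only the label key comes from the log; everything else is a placeholder.
	    package main
	
	    import (
	    	"fmt"
	
	    	corev1 "k8s.io/api/core/v1"
	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"sigs.k8s.io/yaml"
	    )
	
	    func main() {
	    	pod := corev1.Pod{
	    		TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
	    		ObjectMeta: metav1.ObjectMeta{
	    			Name: "no-gcp-creds",
	    			Labels: map[string]string{
	    				// Per the message above, the presence of this label key tells
	    				// the gcp-auth addon to skip mounting credentials into the pod.
	    				"gcp-auth-skip-secret": "true",
	    			},
	    		},
	    		Spec: corev1.PodSpec{
	    			Containers: []corev1.Container{
	    				{Name: "app", Image: "docker.io/library/nginx:latest"},
	    			},
	    		},
	    	}
	    	out, err := yaml.Marshal(pod)
	    	if err != nil {
	    		panic(err)
	    	}
	    	fmt.Println(string(out))
	    }
	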
	I0729 17:35:50.186964   96181 out.go:177] * Enabled addons: storage-provisioner, metrics-server, inspektor-gadget, cloud-spanner, helm-tiller, nvidia-device-plugin, yakd, ingress-dns, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0729 17:35:50.188081   96181 addons.go:510] duration metric: took 1m22.524677806s for enable addons: enabled=[storage-provisioner metrics-server inspektor-gadget cloud-spanner helm-tiller nvidia-device-plugin yakd ingress-dns default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0729 17:35:50.188116   96181 start.go:246] waiting for cluster config update ...
	I0729 17:35:50.188134   96181 start.go:255] writing updated cluster config ...
	I0729 17:35:50.188361   96181 ssh_runner.go:195] Run: rm -f paused
	I0729 17:35:50.239638   96181 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 17:35:50.241190   96181 out.go:177] * Done! kubectl is now configured to use "addons-145541" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 29 17:39:09 addons-145541 crio[688]: time="2024-07-29 17:39:09.723444765Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722274749723390208,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589582,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d68c5b88-e0cc-457c-9835-76429d66d652 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:39:09 addons-145541 crio[688]: time="2024-07-29 17:39:09.723870130Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4e0ea337-3474-4c4e-96d5-0738e07e3717 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:39:09 addons-145541 crio[688]: time="2024-07-29 17:39:09.723925489Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4e0ea337-3474-4c4e-96d5-0738e07e3717 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:39:09 addons-145541 crio[688]: time="2024-07-29 17:39:09.724710996Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:99826bc575a85c1e96d46d7f26df13d5cb2a1b898fc326b32d48a7efd8f02831,PodSandboxId:b6f81bcd48db61778a9dd9460087d5147b3cc4196c0e718f8f0bf3f9d8dc3761,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722274740520038207,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-t9gs9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6444606c-a772-4e9c-b313-7187e9758717,},Annotations:map[string]string{io.kubernetes.container.hash: 38f41b7a,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97c8d95747fbf6ab117058c4b7438f0171e31e4257572c17ea84b6e432a18d0c,PodSandboxId:9ea04a532b4f2b2058f4213f1ff0c1db9ac2096d0f0bb766a0161f85b92aebd2,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722274601285458508,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b9cfcd35-b093-46c2-ae44-2c916c5de80b,},Annotations:map[string]string{io.kubernet
es.container.hash: ef45ff76,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8deb3bdf34f3796bc8ec204614f887b4343ab93caee7b197db99f1d17370133f,PodSandboxId:8a766dd437481288895dc64c3f05ab8ad95b913c9d8950ef174218fe8ec4c5e9,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1722274590108060674,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-nvghv,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 011cd85f-b07b-46e4-b4bd-1f47b7dc24df,},Annotations:map[string]string{io.kubernetes.container.hash: 8bf92d6c,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61c8df7c06a7c24a7a18e22ca66754de29c01c30cfdcb57f30806e3d575cc724,PodSandboxId:6e4334647505cf853335a0f6cd454ceb570e606fd9cb00f65d9a2fb86165f4f9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722274554709310587,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubern
etes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9b8d7938-0a77-4cff-9e81-6455967a4c76,},Annotations:map[string]string{io.kubernetes.container.hash: 9a603d37,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8ea63b719dad3495b7e61d9549f5f7135fe62ea4e87be089675c856bac8a3bc,PodSandboxId:dd8847eed780e19f2c6a03e4d0b128fb472d80f84bb14c1c710bb8fd5f387e14,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722274521889541702,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-s2d2m,io.kub
ernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e400d01f-50a0-4f78-bd3a-9b0f9c63beab,},Annotations:map[string]string{io.kubernetes.container.hash: 64235ba3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fc9a50e0888f75951fe08761bcc8d8754f969c8eb40fc43d90c627bc6c039df,PodSandboxId:6ba28f2ca855b11d63c95a44d18a2a48f3c9af4e1a9170cf5dec8e2b485353dd,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722274521303787534,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingre
ss-nginx-admission-create-dt9w8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c994c8e9-3934-4e5d-8ac4-e82e4005eae2,},Annotations:map[string]string{io.kubernetes.container.hash: d926646a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1473edccf5c58cf2c9f0a63af99d7ed0d95c2ec5507062cdd22dab7e3e693d54,PodSandboxId:581e355dec7028fdbdb7314e552979106a837b66baffba26a126481edc270958,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1722274517031656954,Labels:map[string]string{io.kubernetes.contain
er.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-wkvlg,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 2081ac21-9f79-448b-ae42-1077fae38ef9,},Annotations:map[string]string{io.kubernetes.container.hash: f9ce71f5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f5c7001264b698e30818eff99cbeb83a51439e58115fd60a99f663876ceb6e7,PodSandboxId:887eb3b237ee12aa9cccc13ebbe16d1acc46aa8eab81d3d55fd3405f4b17d3b6,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1
722274500463481915,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-twcpr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 729a1011-260e-49bc-9fe9-0f5a13a4f5d7,},Annotations:map[string]string{io.kubernetes.container.hash: 8a02c318,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2638d0f3fe4e51cdc6dae5202502b3489349eb03a0433187d89afa4f04258bb0,PodSandboxId:ae659103a1f4af5c82dd9058adbadc9d1989d7dc646f730d1c896b394920d657,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d
628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722274474327653971,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cd58c8b-201b-433a-917f-1382e5a8fa0a,},Annotations:map[string]string{io.kubernetes.container.hash: 13db8ad4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28f0536849e4d33ef5404799f1da0c33720395f13b1a5e639f3cdc93f8bff5e8,PodSandboxId:2989fd014413395f0ff80529c0cefc21fe42f921b6c62504d05c51a46df43c66,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a67
4fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722274471486750466,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-dfrfm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f7f3dfc-f445-447d-8b0f-f9768984eff7,},Annotations:map[string]string{io.kubernetes.container.hash: 5355ebe7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db9a7cd1c02e6cfdb6d03cbb5630580a6a0eafef14ebdbae1a480ee236727e02,PodSandboxId:5d4162abe009f4c97046ae8539bd5ef003e3343df2f7d2941a1cc5263d691835,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},I
mage:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722274468856613349,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v6sd2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a80c5a1-59ca-4e68-b237-5e7e03f8c23e,},Annotations:map[string]string{io.kubernetes.container.hash: d5cf5e75,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:954122fb41ccc0077861429e96b58887533974d1bc428b25958d6a47ceda9b78,PodSandboxId:f823acb8f6efe7e1f3e963afed288e21bc166e055f7c714360ecae2f989e96eb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Imag
e:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722274449044766573,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-145541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83fe687db118bee4366eb90c9db19856,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:905e9468d35c61aa82eeec9e1d23a5f0b7a51d206140d497222588fd6846d273,PodSandboxId:b5704093a7247f101b286297047314216bef4d86693be84020998c11672e1615,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Ima
ge:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722274449015587187,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-145541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dce093dfe1a1321ec268f1e97babb4b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a81e90aed1439aa9dbccf7dec2273f8264cb1879c7b67c6ac17688632472920,PodSandboxId:eb740979212835e357a0d7a2b7422172cff1a1c49d65450b42492fe408cacc5f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788
eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722274449025150954,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-145541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61d5145776697d68caff9315eb068a51,},Annotations:map[string]string{io.kubernetes.container.hash: f61661de,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3b4e3a799006df5d6b91d4a312bbf6b40762d89407b95a4d33f84e0e3504b98,PodSandboxId:bcb11c13549f7843a069467b69c7682192c250a03b5c3357cc5f80040a177f80,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b0
95d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722274449021513077,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-145541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 632974a84e8c8f2d87e51d80d84b674c,},Annotations:map[string]string{io.kubernetes.container.hash: 2955e17,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4e0ea337-3474-4c4e-96d5-0738e07e3717 name=/runtime.v1.RuntimeService/ListContainers
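	
	The CRI-O debug entries above are the server side of standard CRI calls (Version, ImageFsInfo, ListContainers with an empty filter). For reference, here is a minimal client-side sketch of the same calls over gRPC using the published CRI API; the socket path is the usual CRI-O default and is an assumption here, and this is not the kubelet's or minikube's actual client code.
	
	    // crilist.go: minimal sketch of the CRI calls visible in the CRI-O debug
	    // log above (Version, ImageFsInfo, ListContainers). Socket path assumed.
	    package main
	
	    import (
	    	"context"
	    	"fmt"
	    	"time"
	
	    	"google.golang.org/grpc"
	    	"google.golang.org/grpc/credentials/insecure"
	    	pb "k8s.io/cri-api/pkg/apis/runtime/v1"
	    )
	
	    func main() {
	    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	    	defer cancel()
	
	    	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
	    		grpc.WithTransportCredentials(insecure.NewCredentials()))
	    	if err != nil {
	    		panic(err)
	    	}
	    	defer conn.Close()
	
	    	rt := pb.NewRuntimeServiceClient(conn)
	    	img := pb.NewImageServiceClient(conn)
	
	    	ver, err := rt.Version(ctx, &pb.VersionRequest{})
	    	if err != nil {
	    		panic(err)
	    	}
	    	fmt.Printf("runtime: %s %s\n", ver.RuntimeName, ver.RuntimeVersion)
	
	    	fs, err := img.ImageFsInfo(ctx, &pb.ImageFsInfoRequest{})
	    	if err != nil {
	    		panic(err)
	    	}
	    	fmt.Printf("image filesystems: %d\n", len(fs.ImageFilesystems))
	
	    	// Empty filter: same "full container list" behaviour the log notes.
	    	cts, err := rt.ListContainers(ctx, &pb.ListContainersRequest{})
	    	if err != nil {
	    		panic(err)
	    	}
	    	for _, c := range cts.Containers {
	    		fmt.Printf("%s  %s  %s\n", c.Id[:12], c.Metadata.Name, c.State)
	    	}
	    }
	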
	Jul 29 17:39:09 addons-145541 crio[688]: time="2024-07-29 17:39:09.763229273Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7f8d2843-42a0-42ca-a0e7-3962ae0233dd name=/runtime.v1.RuntimeService/Version
	Jul 29 17:39:09 addons-145541 crio[688]: time="2024-07-29 17:39:09.763315571Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7f8d2843-42a0-42ca-a0e7-3962ae0233dd name=/runtime.v1.RuntimeService/Version
	Jul 29 17:39:09 addons-145541 crio[688]: time="2024-07-29 17:39:09.764535202Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=91c5ca92-a674-4e8f-aaa9-80128b4dbece name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:39:09 addons-145541 crio[688]: time="2024-07-29 17:39:09.766180391Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722274749766153873,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589582,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=91c5ca92-a674-4e8f-aaa9-80128b4dbece name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:39:09 addons-145541 crio[688]: time="2024-07-29 17:39:09.766774890Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dd2d94ff-648d-4b38-9d52-cec02ef2190d name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:39:09 addons-145541 crio[688]: time="2024-07-29 17:39:09.766844881Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dd2d94ff-648d-4b38-9d52-cec02ef2190d name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:39:09 addons-145541 crio[688]: time="2024-07-29 17:39:09.767198814Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:99826bc575a85c1e96d46d7f26df13d5cb2a1b898fc326b32d48a7efd8f02831,PodSandboxId:b6f81bcd48db61778a9dd9460087d5147b3cc4196c0e718f8f0bf3f9d8dc3761,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722274740520038207,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-t9gs9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6444606c-a772-4e9c-b313-7187e9758717,},Annotations:map[string]string{io.kubernetes.container.hash: 38f41b7a,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97c8d95747fbf6ab117058c4b7438f0171e31e4257572c17ea84b6e432a18d0c,PodSandboxId:9ea04a532b4f2b2058f4213f1ff0c1db9ac2096d0f0bb766a0161f85b92aebd2,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722274601285458508,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b9cfcd35-b093-46c2-ae44-2c916c5de80b,},Annotations:map[string]string{io.kubernet
es.container.hash: ef45ff76,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8deb3bdf34f3796bc8ec204614f887b4343ab93caee7b197db99f1d17370133f,PodSandboxId:8a766dd437481288895dc64c3f05ab8ad95b913c9d8950ef174218fe8ec4c5e9,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1722274590108060674,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-nvghv,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 011cd85f-b07b-46e4-b4bd-1f47b7dc24df,},Annotations:map[string]string{io.kubernetes.container.hash: 8bf92d6c,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61c8df7c06a7c24a7a18e22ca66754de29c01c30cfdcb57f30806e3d575cc724,PodSandboxId:6e4334647505cf853335a0f6cd454ceb570e606fd9cb00f65d9a2fb86165f4f9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722274554709310587,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubern
etes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9b8d7938-0a77-4cff-9e81-6455967a4c76,},Annotations:map[string]string{io.kubernetes.container.hash: 9a603d37,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8ea63b719dad3495b7e61d9549f5f7135fe62ea4e87be089675c856bac8a3bc,PodSandboxId:dd8847eed780e19f2c6a03e4d0b128fb472d80f84bb14c1c710bb8fd5f387e14,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722274521889541702,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-s2d2m,io.kub
ernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e400d01f-50a0-4f78-bd3a-9b0f9c63beab,},Annotations:map[string]string{io.kubernetes.container.hash: 64235ba3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fc9a50e0888f75951fe08761bcc8d8754f969c8eb40fc43d90c627bc6c039df,PodSandboxId:6ba28f2ca855b11d63c95a44d18a2a48f3c9af4e1a9170cf5dec8e2b485353dd,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722274521303787534,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingre
ss-nginx-admission-create-dt9w8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c994c8e9-3934-4e5d-8ac4-e82e4005eae2,},Annotations:map[string]string{io.kubernetes.container.hash: d926646a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1473edccf5c58cf2c9f0a63af99d7ed0d95c2ec5507062cdd22dab7e3e693d54,PodSandboxId:581e355dec7028fdbdb7314e552979106a837b66baffba26a126481edc270958,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1722274517031656954,Labels:map[string]string{io.kubernetes.contain
er.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-wkvlg,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 2081ac21-9f79-448b-ae42-1077fae38ef9,},Annotations:map[string]string{io.kubernetes.container.hash: f9ce71f5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f5c7001264b698e30818eff99cbeb83a51439e58115fd60a99f663876ceb6e7,PodSandboxId:887eb3b237ee12aa9cccc13ebbe16d1acc46aa8eab81d3d55fd3405f4b17d3b6,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1
722274500463481915,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-twcpr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 729a1011-260e-49bc-9fe9-0f5a13a4f5d7,},Annotations:map[string]string{io.kubernetes.container.hash: 8a02c318,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2638d0f3fe4e51cdc6dae5202502b3489349eb03a0433187d89afa4f04258bb0,PodSandboxId:ae659103a1f4af5c82dd9058adbadc9d1989d7dc646f730d1c896b394920d657,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d
628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722274474327653971,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cd58c8b-201b-433a-917f-1382e5a8fa0a,},Annotations:map[string]string{io.kubernetes.container.hash: 13db8ad4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28f0536849e4d33ef5404799f1da0c33720395f13b1a5e639f3cdc93f8bff5e8,PodSandboxId:2989fd014413395f0ff80529c0cefc21fe42f921b6c62504d05c51a46df43c66,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a67
4fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722274471486750466,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-dfrfm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f7f3dfc-f445-447d-8b0f-f9768984eff7,},Annotations:map[string]string{io.kubernetes.container.hash: 5355ebe7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db9a7cd1c02e6cfdb6d03cbb5630580a6a0eafef14ebdbae1a480ee236727e02,PodSandboxId:5d4162abe009f4c97046ae8539bd5ef003e3343df2f7d2941a1cc5263d691835,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},I
mage:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722274468856613349,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v6sd2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a80c5a1-59ca-4e68-b237-5e7e03f8c23e,},Annotations:map[string]string{io.kubernetes.container.hash: d5cf5e75,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:954122fb41ccc0077861429e96b58887533974d1bc428b25958d6a47ceda9b78,PodSandboxId:f823acb8f6efe7e1f3e963afed288e21bc166e055f7c714360ecae2f989e96eb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Imag
e:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722274449044766573,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-145541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83fe687db118bee4366eb90c9db19856,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:905e9468d35c61aa82eeec9e1d23a5f0b7a51d206140d497222588fd6846d273,PodSandboxId:b5704093a7247f101b286297047314216bef4d86693be84020998c11672e1615,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Ima
ge:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722274449015587187,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-145541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dce093dfe1a1321ec268f1e97babb4b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a81e90aed1439aa9dbccf7dec2273f8264cb1879c7b67c6ac17688632472920,PodSandboxId:eb740979212835e357a0d7a2b7422172cff1a1c49d65450b42492fe408cacc5f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788
eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722274449025150954,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-145541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61d5145776697d68caff9315eb068a51,},Annotations:map[string]string{io.kubernetes.container.hash: f61661de,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3b4e3a799006df5d6b91d4a312bbf6b40762d89407b95a4d33f84e0e3504b98,PodSandboxId:bcb11c13549f7843a069467b69c7682192c250a03b5c3357cc5f80040a177f80,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b0
95d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722274449021513077,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-145541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 632974a84e8c8f2d87e51d80d84b674c,},Annotations:map[string]string{io.kubernetes.container.hash: 2955e17,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dd2d94ff-648d-4b38-9d52-cec02ef2190d name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:39:09 addons-145541 crio[688]: time="2024-07-29 17:39:09.804657165Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bdd48bbd-88f7-4603-9feb-ddb023d5860f name=/runtime.v1.RuntimeService/Version
	Jul 29 17:39:09 addons-145541 crio[688]: time="2024-07-29 17:39:09.804873329Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bdd48bbd-88f7-4603-9feb-ddb023d5860f name=/runtime.v1.RuntimeService/Version
	Jul 29 17:39:09 addons-145541 crio[688]: time="2024-07-29 17:39:09.805871189Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dfc97da7-0ef3-4565-9ca2-1694e27f250e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:39:09 addons-145541 crio[688]: time="2024-07-29 17:39:09.807064630Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722274749807040076,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589582,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dfc97da7-0ef3-4565-9ca2-1694e27f250e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:39:09 addons-145541 crio[688]: time="2024-07-29 17:39:09.807608619Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c74f2522-8e62-453d-890b-240fd85c83df name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:39:09 addons-145541 crio[688]: time="2024-07-29 17:39:09.807677513Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c74f2522-8e62-453d-890b-240fd85c83df name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:39:09 addons-145541 crio[688]: time="2024-07-29 17:39:09.808509438Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:99826bc575a85c1e96d46d7f26df13d5cb2a1b898fc326b32d48a7efd8f02831,PodSandboxId:b6f81bcd48db61778a9dd9460087d5147b3cc4196c0e718f8f0bf3f9d8dc3761,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722274740520038207,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-t9gs9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6444606c-a772-4e9c-b313-7187e9758717,},Annotations:map[string]string{io.kubernetes.container.hash: 38f41b7a,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97c8d95747fbf6ab117058c4b7438f0171e31e4257572c17ea84b6e432a18d0c,PodSandboxId:9ea04a532b4f2b2058f4213f1ff0c1db9ac2096d0f0bb766a0161f85b92aebd2,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722274601285458508,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b9cfcd35-b093-46c2-ae44-2c916c5de80b,},Annotations:map[string]string{io.kubernet
es.container.hash: ef45ff76,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8deb3bdf34f3796bc8ec204614f887b4343ab93caee7b197db99f1d17370133f,PodSandboxId:8a766dd437481288895dc64c3f05ab8ad95b913c9d8950ef174218fe8ec4c5e9,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1722274590108060674,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-nvghv,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 011cd85f-b07b-46e4-b4bd-1f47b7dc24df,},Annotations:map[string]string{io.kubernetes.container.hash: 8bf92d6c,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61c8df7c06a7c24a7a18e22ca66754de29c01c30cfdcb57f30806e3d575cc724,PodSandboxId:6e4334647505cf853335a0f6cd454ceb570e606fd9cb00f65d9a2fb86165f4f9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722274554709310587,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubern
etes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9b8d7938-0a77-4cff-9e81-6455967a4c76,},Annotations:map[string]string{io.kubernetes.container.hash: 9a603d37,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8ea63b719dad3495b7e61d9549f5f7135fe62ea4e87be089675c856bac8a3bc,PodSandboxId:dd8847eed780e19f2c6a03e4d0b128fb472d80f84bb14c1c710bb8fd5f387e14,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722274521889541702,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-s2d2m,io.kub
ernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e400d01f-50a0-4f78-bd3a-9b0f9c63beab,},Annotations:map[string]string{io.kubernetes.container.hash: 64235ba3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fc9a50e0888f75951fe08761bcc8d8754f969c8eb40fc43d90c627bc6c039df,PodSandboxId:6ba28f2ca855b11d63c95a44d18a2a48f3c9af4e1a9170cf5dec8e2b485353dd,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722274521303787534,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingre
ss-nginx-admission-create-dt9w8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c994c8e9-3934-4e5d-8ac4-e82e4005eae2,},Annotations:map[string]string{io.kubernetes.container.hash: d926646a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1473edccf5c58cf2c9f0a63af99d7ed0d95c2ec5507062cdd22dab7e3e693d54,PodSandboxId:581e355dec7028fdbdb7314e552979106a837b66baffba26a126481edc270958,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1722274517031656954,Labels:map[string]string{io.kubernetes.contain
er.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-wkvlg,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 2081ac21-9f79-448b-ae42-1077fae38ef9,},Annotations:map[string]string{io.kubernetes.container.hash: f9ce71f5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f5c7001264b698e30818eff99cbeb83a51439e58115fd60a99f663876ceb6e7,PodSandboxId:887eb3b237ee12aa9cccc13ebbe16d1acc46aa8eab81d3d55fd3405f4b17d3b6,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1
722274500463481915,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-twcpr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 729a1011-260e-49bc-9fe9-0f5a13a4f5d7,},Annotations:map[string]string{io.kubernetes.container.hash: 8a02c318,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2638d0f3fe4e51cdc6dae5202502b3489349eb03a0433187d89afa4f04258bb0,PodSandboxId:ae659103a1f4af5c82dd9058adbadc9d1989d7dc646f730d1c896b394920d657,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d
628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722274474327653971,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cd58c8b-201b-433a-917f-1382e5a8fa0a,},Annotations:map[string]string{io.kubernetes.container.hash: 13db8ad4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28f0536849e4d33ef5404799f1da0c33720395f13b1a5e639f3cdc93f8bff5e8,PodSandboxId:2989fd014413395f0ff80529c0cefc21fe42f921b6c62504d05c51a46df43c66,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a67
4fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722274471486750466,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-dfrfm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f7f3dfc-f445-447d-8b0f-f9768984eff7,},Annotations:map[string]string{io.kubernetes.container.hash: 5355ebe7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db9a7cd1c02e6cfdb6d03cbb5630580a6a0eafef14ebdbae1a480ee236727e02,PodSandboxId:5d4162abe009f4c97046ae8539bd5ef003e3343df2f7d2941a1cc5263d691835,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},I
mage:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722274468856613349,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v6sd2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a80c5a1-59ca-4e68-b237-5e7e03f8c23e,},Annotations:map[string]string{io.kubernetes.container.hash: d5cf5e75,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:954122fb41ccc0077861429e96b58887533974d1bc428b25958d6a47ceda9b78,PodSandboxId:f823acb8f6efe7e1f3e963afed288e21bc166e055f7c714360ecae2f989e96eb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Imag
e:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722274449044766573,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-145541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83fe687db118bee4366eb90c9db19856,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:905e9468d35c61aa82eeec9e1d23a5f0b7a51d206140d497222588fd6846d273,PodSandboxId:b5704093a7247f101b286297047314216bef4d86693be84020998c11672e1615,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Ima
ge:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722274449015587187,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-145541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dce093dfe1a1321ec268f1e97babb4b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a81e90aed1439aa9dbccf7dec2273f8264cb1879c7b67c6ac17688632472920,PodSandboxId:eb740979212835e357a0d7a2b7422172cff1a1c49d65450b42492fe408cacc5f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788
eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722274449025150954,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-145541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61d5145776697d68caff9315eb068a51,},Annotations:map[string]string{io.kubernetes.container.hash: f61661de,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3b4e3a799006df5d6b91d4a312bbf6b40762d89407b95a4d33f84e0e3504b98,PodSandboxId:bcb11c13549f7843a069467b69c7682192c250a03b5c3357cc5f80040a177f80,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b0
95d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722274449021513077,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-145541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 632974a84e8c8f2d87e51d80d84b674c,},Annotations:map[string]string{io.kubernetes.container.hash: 2955e17,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c74f2522-8e62-453d-890b-240fd85c83df name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:39:09 addons-145541 crio[688]: time="2024-07-29 17:39:09.841798807Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d51dfc07-08d8-4ad0-9f89-77613ba6005f name=/runtime.v1.RuntimeService/Version
	Jul 29 17:39:09 addons-145541 crio[688]: time="2024-07-29 17:39:09.841880799Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d51dfc07-08d8-4ad0-9f89-77613ba6005f name=/runtime.v1.RuntimeService/Version
	Jul 29 17:39:09 addons-145541 crio[688]: time="2024-07-29 17:39:09.842809045Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a8471616-1ad0-48b9-8f30-8d66fea78571 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:39:09 addons-145541 crio[688]: time="2024-07-29 17:39:09.848430060Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722274749848399315,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589582,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a8471616-1ad0-48b9-8f30-8d66fea78571 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:39:09 addons-145541 crio[688]: time="2024-07-29 17:39:09.851399670Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=809d9008-edd8-4aef-a8b6-3df46b269ad4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:39:09 addons-145541 crio[688]: time="2024-07-29 17:39:09.851480347Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=809d9008-edd8-4aef-a8b6-3df46b269ad4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:39:09 addons-145541 crio[688]: time="2024-07-29 17:39:09.851788280Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:99826bc575a85c1e96d46d7f26df13d5cb2a1b898fc326b32d48a7efd8f02831,PodSandboxId:b6f81bcd48db61778a9dd9460087d5147b3cc4196c0e718f8f0bf3f9d8dc3761,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722274740520038207,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-t9gs9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6444606c-a772-4e9c-b313-7187e9758717,},Annotations:map[string]string{io.kubernetes.container.hash: 38f41b7a,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97c8d95747fbf6ab117058c4b7438f0171e31e4257572c17ea84b6e432a18d0c,PodSandboxId:9ea04a532b4f2b2058f4213f1ff0c1db9ac2096d0f0bb766a0161f85b92aebd2,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722274601285458508,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b9cfcd35-b093-46c2-ae44-2c916c5de80b,},Annotations:map[string]string{io.kubernet
es.container.hash: ef45ff76,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8deb3bdf34f3796bc8ec204614f887b4343ab93caee7b197db99f1d17370133f,PodSandboxId:8a766dd437481288895dc64c3f05ab8ad95b913c9d8950ef174218fe8ec4c5e9,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1722274590108060674,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-nvghv,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 011cd85f-b07b-46e4-b4bd-1f47b7dc24df,},Annotations:map[string]string{io.kubernetes.container.hash: 8bf92d6c,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61c8df7c06a7c24a7a18e22ca66754de29c01c30cfdcb57f30806e3d575cc724,PodSandboxId:6e4334647505cf853335a0f6cd454ceb570e606fd9cb00f65d9a2fb86165f4f9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722274554709310587,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubern
etes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9b8d7938-0a77-4cff-9e81-6455967a4c76,},Annotations:map[string]string{io.kubernetes.container.hash: 9a603d37,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8ea63b719dad3495b7e61d9549f5f7135fe62ea4e87be089675c856bac8a3bc,PodSandboxId:dd8847eed780e19f2c6a03e4d0b128fb472d80f84bb14c1c710bb8fd5f387e14,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722274521889541702,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-s2d2m,io.kub
ernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e400d01f-50a0-4f78-bd3a-9b0f9c63beab,},Annotations:map[string]string{io.kubernetes.container.hash: 64235ba3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fc9a50e0888f75951fe08761bcc8d8754f969c8eb40fc43d90c627bc6c039df,PodSandboxId:6ba28f2ca855b11d63c95a44d18a2a48f3c9af4e1a9170cf5dec8e2b485353dd,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722274521303787534,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingre
ss-nginx-admission-create-dt9w8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c994c8e9-3934-4e5d-8ac4-e82e4005eae2,},Annotations:map[string]string{io.kubernetes.container.hash: d926646a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1473edccf5c58cf2c9f0a63af99d7ed0d95c2ec5507062cdd22dab7e3e693d54,PodSandboxId:581e355dec7028fdbdb7314e552979106a837b66baffba26a126481edc270958,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1722274517031656954,Labels:map[string]string{io.kubernetes.contain
er.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-wkvlg,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 2081ac21-9f79-448b-ae42-1077fae38ef9,},Annotations:map[string]string{io.kubernetes.container.hash: f9ce71f5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f5c7001264b698e30818eff99cbeb83a51439e58115fd60a99f663876ceb6e7,PodSandboxId:887eb3b237ee12aa9cccc13ebbe16d1acc46aa8eab81d3d55fd3405f4b17d3b6,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1
722274500463481915,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-twcpr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 729a1011-260e-49bc-9fe9-0f5a13a4f5d7,},Annotations:map[string]string{io.kubernetes.container.hash: 8a02c318,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2638d0f3fe4e51cdc6dae5202502b3489349eb03a0433187d89afa4f04258bb0,PodSandboxId:ae659103a1f4af5c82dd9058adbadc9d1989d7dc646f730d1c896b394920d657,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d
628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722274474327653971,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cd58c8b-201b-433a-917f-1382e5a8fa0a,},Annotations:map[string]string{io.kubernetes.container.hash: 13db8ad4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28f0536849e4d33ef5404799f1da0c33720395f13b1a5e639f3cdc93f8bff5e8,PodSandboxId:2989fd014413395f0ff80529c0cefc21fe42f921b6c62504d05c51a46df43c66,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a67
4fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722274471486750466,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-dfrfm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f7f3dfc-f445-447d-8b0f-f9768984eff7,},Annotations:map[string]string{io.kubernetes.container.hash: 5355ebe7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db9a7cd1c02e6cfdb6d03cbb5630580a6a0eafef14ebdbae1a480ee236727e02,PodSandboxId:5d4162abe009f4c97046ae8539bd5ef003e3343df2f7d2941a1cc5263d691835,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},I
mage:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722274468856613349,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v6sd2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a80c5a1-59ca-4e68-b237-5e7e03f8c23e,},Annotations:map[string]string{io.kubernetes.container.hash: d5cf5e75,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:954122fb41ccc0077861429e96b58887533974d1bc428b25958d6a47ceda9b78,PodSandboxId:f823acb8f6efe7e1f3e963afed288e21bc166e055f7c714360ecae2f989e96eb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Imag
e:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722274449044766573,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-145541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83fe687db118bee4366eb90c9db19856,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:905e9468d35c61aa82eeec9e1d23a5f0b7a51d206140d497222588fd6846d273,PodSandboxId:b5704093a7247f101b286297047314216bef4d86693be84020998c11672e1615,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Ima
ge:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722274449015587187,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-145541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dce093dfe1a1321ec268f1e97babb4b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a81e90aed1439aa9dbccf7dec2273f8264cb1879c7b67c6ac17688632472920,PodSandboxId:eb740979212835e357a0d7a2b7422172cff1a1c49d65450b42492fe408cacc5f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788
eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722274449025150954,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-145541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61d5145776697d68caff9315eb068a51,},Annotations:map[string]string{io.kubernetes.container.hash: f61661de,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3b4e3a799006df5d6b91d4a312bbf6b40762d89407b95a4d33f84e0e3504b98,PodSandboxId:bcb11c13549f7843a069467b69c7682192c250a03b5c3357cc5f80040a177f80,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b0
95d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722274449021513077,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-145541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 632974a84e8c8f2d87e51d80d84b674c,},Annotations:map[string]string{io.kubernetes.container.hash: 2955e17,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=809d9008-edd8-4aef-a8b6-3df46b269ad4 name=/runtime.v1.RuntimeService/ListContainers
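	A note on the repeated Version / ImageFsInfo / ListContainers entries above: they appear to be routine CRI polls against the CRI-O runtime (no filters are applied, and the same container list comes back on every scrape within the same second). The same endpoints can be queried by hand with crictl; the commands below are a minimal sketch assuming the default CRI-O socket path and root access on the node, which are assumptions about a typical setup and are not taken from this log:
	
	# inspect the same CRI endpoints seen in the debug log above
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version        # RuntimeService/Version
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo    # ImageService/ImageFsInfo
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a -o json  # RuntimeService/ListContainers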
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	99826bc575a85       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        9 seconds ago       Running             hello-world-app           0                   b6f81bcd48db6       hello-world-app-6778b5fc9f-t9gs9
	97c8d95747fbf       docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                              2 minutes ago       Running             nginx                     0                   9ea04a532b4f2       nginx
	8deb3bdf34f37       ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37                        2 minutes ago       Running             headlamp                  0                   8a766dd437481       headlamp-7867546754-nvghv
	61c8df7c06a7c       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   6e4334647505c       busybox
	b8ea63b719dad       684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66                                                             3 minutes ago       Exited              patch                     1                   dd8847eed780e       ingress-nginx-admission-patch-s2d2m
	4fc9a50e0888f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   3 minutes ago       Exited              create                    0                   6ba28f2ca855b       ingress-nginx-admission-create-dt9w8
	1473edccf5c58       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             3 minutes ago       Running             local-path-provisioner    0                   581e355dec702       local-path-provisioner-8d985888d-wkvlg
	7f5c7001264b6       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872        4 minutes ago       Running             metrics-server            0                   887eb3b237ee1       metrics-server-c59844bb4-twcpr
	2638d0f3fe4e5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   ae659103a1f4a       storage-provisioner
	28f0536849e4d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             4 minutes ago       Running             coredns                   0                   2989fd0144133       coredns-7db6d8ff4d-dfrfm
	db9a7cd1c02e6       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                                             4 minutes ago       Running             kube-proxy                0                   5d4162abe009f       kube-proxy-v6sd2
	954122fb41ccc       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                                             5 minutes ago       Running             kube-controller-manager   0                   f823acb8f6efe       kube-controller-manager-addons-145541
	9a81e90aed143       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                             5 minutes ago       Running             etcd                      0                   eb74097921283       etcd-addons-145541
	b3b4e3a799006       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                                             5 minutes ago       Running             kube-apiserver            0                   bcb11c13549f7       kube-apiserver-addons-145541
	905e9468d35c6       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                                             5 minutes ago       Running             kube-scheduler            0                   b5704093a7247       kube-scheduler-addons-145541
	
	
	==> coredns [28f0536849e4d33ef5404799f1da0c33720395f13b1a5e639f3cdc93f8bff5e8] <==
	[INFO] 10.244.0.7:45762 - 47565 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000149574s
	[INFO] 10.244.0.7:59178 - 42865 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000120837s
	[INFO] 10.244.0.7:59178 - 9331 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000094091s
	[INFO] 10.244.0.7:47171 - 15728 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000079648s
	[INFO] 10.244.0.7:47171 - 52595 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000070794s
	[INFO] 10.244.0.7:39981 - 20795 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000085211s
	[INFO] 10.244.0.7:39981 - 58937 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00009273s
	[INFO] 10.244.0.7:59309 - 36416 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000100635s
	[INFO] 10.244.0.7:59309 - 47685 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000198172s
	[INFO] 10.244.0.7:57263 - 403 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000066491s
	[INFO] 10.244.0.7:57263 - 2193 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000049571s
	[INFO] 10.244.0.7:45217 - 4903 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000042541s
	[INFO] 10.244.0.7:45217 - 20265 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00003672s
	[INFO] 10.244.0.7:44300 - 40164 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000044674s
	[INFO] 10.244.0.7:44300 - 51941 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000051681s
	[INFO] 10.244.0.22:33933 - 860 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000406818s
	[INFO] 10.244.0.22:32793 - 10821 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000142403s
	[INFO] 10.244.0.22:49708 - 53449 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00007366s
	[INFO] 10.244.0.22:45962 - 57244 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000065737s
	[INFO] 10.244.0.22:54506 - 20102 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000134306s
	[INFO] 10.244.0.22:35808 - 40225 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000503186s
	[INFO] 10.244.0.22:35747 - 43108 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.00113837s
	[INFO] 10.244.0.22:60879 - 34246 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00141598s
	[INFO] 10.244.0.27:44390 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000355043s
	[INFO] 10.244.0.27:36489 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000091784s
	
	
	==> describe nodes <==
	Name:               addons-145541
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-145541
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=de53afae5e8f4269438099f4dad14d93a8a17e35
	                    minikube.k8s.io/name=addons-145541
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T17_34_14_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-145541
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 17:34:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-145541
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 17:39:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 17:36:47 +0000   Mon, 29 Jul 2024 17:34:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 17:36:47 +0000   Mon, 29 Jul 2024 17:34:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 17:36:47 +0000   Mon, 29 Jul 2024 17:34:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 17:36:47 +0000   Mon, 29 Jul 2024 17:34:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.242
	  Hostname:    addons-145541
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 ed693bef4bfa479e8fe75e2f6aa79535
	  System UUID:                ed693bef-4bfa-479e-8fe7-5e2f6aa79535
	  Boot ID:                    eee4bf92-69a5-4e92-84eb-0f893b86c8cd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                      ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m17s
	  default                     hello-world-app-6778b5fc9f-t9gs9          0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  headlamp                    headlamp-7867546754-nvghv                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m46s
	  kube-system                 coredns-7db6d8ff4d-dfrfm                  100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m42s
	  kube-system                 etcd-addons-145541                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m56s
	  kube-system                 kube-apiserver-addons-145541              250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 kube-controller-manager-addons-145541     200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 kube-proxy-v6sd2                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 kube-scheduler-addons-145541              100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 metrics-server-c59844bb4-twcpr            100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         4m37s
	  kube-system                 storage-provisioner                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  local-path-storage          local-path-provisioner-8d985888d-wkvlg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m40s                kube-proxy       
	  Normal  NodeHasSufficientMemory  5m2s (x8 over 5m2s)  kubelet          Node addons-145541 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m2s (x8 over 5m2s)  kubelet          Node addons-145541 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m2s (x7 over 5m2s)  kubelet          Node addons-145541 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m56s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m56s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m56s                kubelet          Node addons-145541 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m56s                kubelet          Node addons-145541 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m56s                kubelet          Node addons-145541 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m55s                kubelet          Node addons-145541 status is now: NodeReady
	  Normal  RegisteredNode           4m44s                node-controller  Node addons-145541 event: Registered Node addons-145541 in Controller
	
	
	==> dmesg <==
	[  +0.149328] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.092602] kauditd_printk_skb: 113 callbacks suppressed
	[  +5.073101] kauditd_printk_skb: 136 callbacks suppressed
	[  +8.223655] kauditd_printk_skb: 77 callbacks suppressed
	[ +11.941652] kauditd_printk_skb: 2 callbacks suppressed
	[Jul29 17:35] kauditd_printk_skb: 4 callbacks suppressed
	[  +8.115820] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.718757] kauditd_printk_skb: 35 callbacks suppressed
	[  +5.083402] kauditd_printk_skb: 35 callbacks suppressed
	[  +5.145739] kauditd_printk_skb: 69 callbacks suppressed
	[ +11.339123] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.001058] kauditd_printk_skb: 7 callbacks suppressed
	[  +7.863867] kauditd_printk_skb: 48 callbacks suppressed
	[Jul29 17:36] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.781492] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.054439] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.658235] kauditd_printk_skb: 61 callbacks suppressed
	[  +5.000536] kauditd_printk_skb: 61 callbacks suppressed
	[  +5.972500] kauditd_printk_skb: 12 callbacks suppressed
	[  +6.080847] kauditd_printk_skb: 29 callbacks suppressed
	[  +6.188249] kauditd_printk_skb: 30 callbacks suppressed
	[Jul29 17:37] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.295843] kauditd_printk_skb: 33 callbacks suppressed
	[Jul29 17:38] kauditd_printk_skb: 6 callbacks suppressed
	[Jul29 17:39] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [9a81e90aed1439aa9dbccf7dec2273f8264cb1879c7b67c6ac17688632472920] <==
	{"level":"warn","ts":"2024-07-29T17:35:44.614315Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"163.117649ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/\" range_end:\"/registry/events0\" count_only:true ","response":"range_response_count:0 size:8"}
	{"level":"warn","ts":"2024-07-29T17:35:44.614329Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"164.10821ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-29T17:35:44.614336Z","caller":"traceutil/trace.go:171","msg":"trace[379648445] range","detail":"{range_begin:/registry/events/; range_end:/registry/events0; response_count:0; response_revision:1156; }","duration":"163.1565ms","start":"2024-07-29T17:35:44.451173Z","end":"2024-07-29T17:35:44.61433Z","steps":["trace[379648445] 'agreement among raft nodes before linearized reading'  (duration: 163.062149ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T17:35:44.614343Z","caller":"traceutil/trace.go:171","msg":"trace[795194796] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1156; }","duration":"164.151582ms","start":"2024-07-29T17:35:44.450187Z","end":"2024-07-29T17:35:44.614339Z","steps":["trace[795194796] 'agreement among raft nodes before linearized reading'  (duration: 164.12639ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T17:35:44.614449Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"363.545007ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"warn","ts":"2024-07-29T17:35:44.614462Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"244.097407ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:498"}
	{"level":"info","ts":"2024-07-29T17:35:44.614464Z","caller":"traceutil/trace.go:171","msg":"trace[1827532928] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1156; }","duration":"363.580715ms","start":"2024-07-29T17:35:44.250878Z","end":"2024-07-29T17:35:44.614459Z","steps":["trace[1827532928] 'agreement among raft nodes before linearized reading'  (duration: 363.522703ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T17:35:44.614477Z","caller":"traceutil/trace.go:171","msg":"trace[84502644] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1156; }","duration":"244.127169ms","start":"2024-07-29T17:35:44.370344Z","end":"2024-07-29T17:35:44.614471Z","steps":["trace[84502644] 'agreement among raft nodes before linearized reading'  (duration: 244.075364ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T17:35:44.614478Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T17:35:44.250866Z","time spent":"363.609178ms","remote":"127.0.0.1:45306","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1136,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"info","ts":"2024-07-29T17:35:46.883415Z","caller":"traceutil/trace.go:171","msg":"trace[654725735] transaction","detail":"{read_only:false; response_revision:1164; number_of_response:1; }","duration":"188.696799ms","start":"2024-07-29T17:35:46.694648Z","end":"2024-07-29T17:35:46.883344Z","steps":["trace[654725735] 'process raft request'  (duration: 188.465205ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T17:36:23.174721Z","caller":"traceutil/trace.go:171","msg":"trace[167230728] transaction","detail":"{read_only:false; response_revision:1441; number_of_response:1; }","duration":"102.262614ms","start":"2024-07-29T17:36:23.072417Z","end":"2024-07-29T17:36:23.17468Z","steps":["trace[167230728] 'process raft request'  (duration: 101.912495ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T17:36:29.406158Z","caller":"traceutil/trace.go:171","msg":"trace[1434087103] linearizableReadLoop","detail":"{readStateIndex:1572; appliedIndex:1571; }","duration":"193.483273ms","start":"2024-07-29T17:36:29.212639Z","end":"2024-07-29T17:36:29.406122Z","steps":["trace[1434087103] 'read index received'  (duration: 193.333784ms)","trace[1434087103] 'applied index is now lower than readState.Index'  (duration: 148.979µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-29T17:36:29.406326Z","caller":"traceutil/trace.go:171","msg":"trace[820859752] transaction","detail":"{read_only:false; response_revision:1517; number_of_response:1; }","duration":"204.951568ms","start":"2024-07-29T17:36:29.201354Z","end":"2024-07-29T17:36:29.406306Z","steps":["trace[820859752] 'process raft request'  (duration: 204.658729ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T17:36:29.406383Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"193.659076ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshots/\" range_end:\"/registry/snapshot.storage.k8s.io/volumesnapshots0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-29T17:36:29.406418Z","caller":"traceutil/trace.go:171","msg":"trace[1250374592] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshots/; range_end:/registry/snapshot.storage.k8s.io/volumesnapshots0; response_count:0; response_revision:1517; }","duration":"193.772906ms","start":"2024-07-29T17:36:29.212635Z","end":"2024-07-29T17:36:29.406407Z","steps":["trace[1250374592] 'agreement among raft nodes before linearized reading'  (duration: 193.643581ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T17:36:30.001208Z","caller":"traceutil/trace.go:171","msg":"trace[1357907197] linearizableReadLoop","detail":"{readStateIndex:1573; appliedIndex:1572; }","duration":"213.716644ms","start":"2024-07-29T17:36:29.787424Z","end":"2024-07-29T17:36:30.001141Z","steps":["trace[1357907197] 'read index received'  (duration: 213.303972ms)","trace[1357907197] 'applied index is now lower than readState.Index'  (duration: 412.126µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T17:36:30.002192Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"214.751593ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:554"}
	{"level":"info","ts":"2024-07-29T17:36:30.002275Z","caller":"traceutil/trace.go:171","msg":"trace[1059047934] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1518; }","duration":"214.843427ms","start":"2024-07-29T17:36:29.787405Z","end":"2024-07-29T17:36:30.002249Z","steps":["trace[1059047934] 'agreement among raft nodes before linearized reading'  (duration: 214.716163ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T17:36:30.002537Z","caller":"traceutil/trace.go:171","msg":"trace[721338687] transaction","detail":"{read_only:false; response_revision:1518; number_of_response:1; }","duration":"217.160293ms","start":"2024-07-29T17:36:29.785366Z","end":"2024-07-29T17:36:30.002527Z","steps":["trace[721338687] 'process raft request'  (duration: 215.554662ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T17:36:58.937403Z","caller":"traceutil/trace.go:171","msg":"trace[982322811] linearizableReadLoop","detail":"{readStateIndex:1777; appliedIndex:1776; }","duration":"328.886281ms","start":"2024-07-29T17:36:58.608497Z","end":"2024-07-29T17:36:58.937383Z","steps":["trace[982322811] 'read index received'  (duration: 327.139757ms)","trace[982322811] 'applied index is now lower than readState.Index'  (duration: 1.745793ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T17:36:58.937597Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"329.037039ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/task-pv-pod-restore\" ","response":"range_response_count:1 size:2854"}
	{"level":"info","ts":"2024-07-29T17:36:58.93768Z","caller":"traceutil/trace.go:171","msg":"trace[223559889] range","detail":"{range_begin:/registry/pods/default/task-pv-pod-restore; range_end:; response_count:1; response_revision:1713; }","duration":"329.199521ms","start":"2024-07-29T17:36:58.608471Z","end":"2024-07-29T17:36:58.937671Z","steps":["trace[223559889] 'agreement among raft nodes before linearized reading'  (duration: 328.973584ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T17:36:58.937717Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T17:36:58.608459Z","time spent":"329.243464ms","remote":"127.0.0.1:45310","response type":"/etcdserverpb.KV/Range","request count":0,"request size":44,"response count":1,"response size":2877,"request content":"key:\"/registry/pods/default/task-pv-pod-restore\" "}
	{"level":"warn","ts":"2024-07-29T17:36:58.93773Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"286.952813ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/hpvc-restore\" ","response":"range_response_count:1 size:1594"}
	{"level":"info","ts":"2024-07-29T17:36:58.937832Z","caller":"traceutil/trace.go:171","msg":"trace[811627678] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/hpvc-restore; range_end:; response_count:1; response_revision:1713; }","duration":"287.08254ms","start":"2024-07-29T17:36:58.650741Z","end":"2024-07-29T17:36:58.937823Z","steps":["trace[811627678] 'agreement among raft nodes before linearized reading'  (duration: 286.849665ms)"],"step_count":1}
	
	
	==> kernel <==
	 17:39:10 up 5 min,  0 users,  load average: 1.20, 1.35, 0.69
	Linux addons-145541 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [b3b4e3a799006df5d6b91d4a312bbf6b40762d89407b95a4d33f84e0e3504b98] <==
	I0729 17:36:04.882970       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0729 17:36:05.195504       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0729 17:36:24.867102       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.100.83.134"}
	I0729 17:36:38.371654       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0729 17:36:38.556320       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.103.195.224"}
	I0729 17:36:42.918523       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0729 17:36:43.936039       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0729 17:36:47.323328       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0729 17:37:12.198760       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 17:37:12.199051       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0729 17:37:12.221740       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 17:37:12.221798       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0729 17:37:12.233376       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 17:37:12.233428       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0729 17:37:12.264479       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 17:37:12.264622       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0729 17:37:12.291886       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 17:37:12.292025       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0729 17:37:13.222543       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0729 17:37:13.292804       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0729 17:37:13.324328       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0729 17:38:59.383541       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.100.99.193"}
	E0729 17:39:02.048130       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	E0729 17:39:04.715104       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	E0729 17:39:04.720649       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [954122fb41ccc0077861429e96b58887533974d1bc428b25958d6a47ceda9b78] <==
	E0729 17:37:32.018580       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 17:37:51.510558       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 17:37:51.510692       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 17:37:51.933487       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 17:37:51.933542       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 17:37:55.997574       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 17:37:55.997710       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 17:38:06.597760       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 17:38:06.597811       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 17:38:22.887650       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 17:38:22.887723       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 17:38:33.047903       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 17:38:33.047985       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 17:38:38.092995       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 17:38:38.093072       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 17:38:39.554639       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 17:38:39.554746       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0729 17:38:59.235409       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="37.095447ms"
	I0729 17:38:59.263279       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="27.795535ms"
	I0729 17:38:59.263366       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="42.536µs"
	I0729 17:39:01.337769       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="12.522696ms"
	I0729 17:39:01.337856       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="32.012µs"
	I0729 17:39:01.953467       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0729 17:39:01.958019       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0729 17:39:01.964151       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-6d9bd977d4" duration="5.546µs"
	
	
	==> kube-proxy [db9a7cd1c02e6cfdb6d03cbb5630580a6a0eafef14ebdbae1a480ee236727e02] <==
	I0729 17:34:29.693388       1 server_linux.go:69] "Using iptables proxy"
	I0729 17:34:29.715076       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.242"]
	I0729 17:34:29.811845       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 17:34:29.811904       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 17:34:29.811925       1 server_linux.go:165] "Using iptables Proxier"
	I0729 17:34:29.821160       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 17:34:29.821437       1 server.go:872] "Version info" version="v1.30.3"
	I0729 17:34:29.821466       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 17:34:29.828715       1 config.go:192] "Starting service config controller"
	I0729 17:34:29.828742       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 17:34:29.828760       1 config.go:101] "Starting endpoint slice config controller"
	I0729 17:34:29.828763       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 17:34:29.829219       1 config.go:319] "Starting node config controller"
	I0729 17:34:29.829225       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 17:34:29.929075       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 17:34:29.929092       1 shared_informer.go:320] Caches are synced for service config
	I0729 17:34:29.929328       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [905e9468d35c61aa82eeec9e1d23a5f0b7a51d206140d497222588fd6846d273] <==
	W0729 17:34:11.692605       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 17:34:11.692613       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 17:34:12.543862       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 17:34:12.543913       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 17:34:12.592632       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 17:34:12.592684       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 17:34:12.646682       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 17:34:12.646722       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0729 17:34:12.661920       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 17:34:12.662015       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 17:34:12.693095       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 17:34:12.693141       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 17:34:12.710887       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 17:34:12.710973       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 17:34:12.834281       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 17:34:12.834333       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 17:34:12.848157       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 17:34:12.848206       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 17:34:12.966112       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 17:34:12.966159       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 17:34:12.987316       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 17:34:12.987400       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 17:34:12.987462       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 17:34:12.987492       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0729 17:34:15.877650       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 17:38:59 addons-145541 kubelet[1278]: I0729 17:38:59.236446    1278 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c479653-5761-44ea-8d45-514170d3db15" containerName="liveness-probe"
	Jul 29 17:38:59 addons-145541 kubelet[1278]: I0729 17:38:59.236477    1278 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c479653-5761-44ea-8d45-514170d3db15" containerName="csi-snapshotter"
	Jul 29 17:38:59 addons-145541 kubelet[1278]: I0729 17:38:59.236570    1278 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4e0590b-21fd-47a4-b966-e233f95ad067" containerName="gadget"
	Jul 29 17:38:59 addons-145541 kubelet[1278]: I0729 17:38:59.236601    1278 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c479653-5761-44ea-8d45-514170d3db15" containerName="csi-external-health-monitor-controller"
	Jul 29 17:38:59 addons-145541 kubelet[1278]: I0729 17:38:59.236683    1278 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe8391ab-3ece-485a-812b-3821cd2dbbcc" containerName="csi-resizer"
	Jul 29 17:38:59 addons-145541 kubelet[1278]: I0729 17:38:59.281292    1278 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52mhk\" (UniqueName: \"kubernetes.io/projected/6444606c-a772-4e9c-b313-7187e9758717-kube-api-access-52mhk\") pod \"hello-world-app-6778b5fc9f-t9gs9\" (UID: \"6444606c-a772-4e9c-b313-7187e9758717\") " pod="default/hello-world-app-6778b5fc9f-t9gs9"
	Jul 29 17:39:00 addons-145541 kubelet[1278]: I0729 17:39:00.301201    1278 scope.go:117] "RemoveContainer" containerID="c8a333965fb26a8581be6ef351f66329977bc582f1f049a34470786ccdc6c4ad"
	Jul 29 17:39:00 addons-145541 kubelet[1278]: I0729 17:39:00.389576    1278 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhphh\" (UniqueName: \"kubernetes.io/projected/dc7be156-e078-4c48-931f-5daba154a3f7-kube-api-access-rhphh\") pod \"dc7be156-e078-4c48-931f-5daba154a3f7\" (UID: \"dc7be156-e078-4c48-931f-5daba154a3f7\") "
	Jul 29 17:39:00 addons-145541 kubelet[1278]: I0729 17:39:00.396428    1278 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc7be156-e078-4c48-931f-5daba154a3f7-kube-api-access-rhphh" (OuterVolumeSpecName: "kube-api-access-rhphh") pod "dc7be156-e078-4c48-931f-5daba154a3f7" (UID: "dc7be156-e078-4c48-931f-5daba154a3f7"). InnerVolumeSpecName "kube-api-access-rhphh". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 29 17:39:00 addons-145541 kubelet[1278]: I0729 17:39:00.490875    1278 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-rhphh\" (UniqueName: \"kubernetes.io/projected/dc7be156-e078-4c48-931f-5daba154a3f7-kube-api-access-rhphh\") on node \"addons-145541\" DevicePath \"\""
	Jul 29 17:39:01 addons-145541 kubelet[1278]: I0729 17:39:01.345687    1278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-6778b5fc9f-t9gs9" podStartSLOduration=1.652974011 podStartE2EDuration="2.345650084s" podCreationTimestamp="2024-07-29 17:38:59 +0000 UTC" firstStartedPulling="2024-07-29 17:38:59.814123454 +0000 UTC m=+285.736880286" lastFinishedPulling="2024-07-29 17:39:00.506799539 +0000 UTC m=+286.429556359" observedRunningTime="2024-07-29 17:39:01.32675636 +0000 UTC m=+287.249513199" watchObservedRunningTime="2024-07-29 17:39:01.345650084 +0000 UTC m=+287.268406920"
	Jul 29 17:39:02 addons-145541 kubelet[1278]: I0729 17:39:02.254885    1278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c994c8e9-3934-4e5d-8ac4-e82e4005eae2" path="/var/lib/kubelet/pods/c994c8e9-3934-4e5d-8ac4-e82e4005eae2/volumes"
	Jul 29 17:39:02 addons-145541 kubelet[1278]: I0729 17:39:02.255362    1278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc7be156-e078-4c48-931f-5daba154a3f7" path="/var/lib/kubelet/pods/dc7be156-e078-4c48-931f-5daba154a3f7/volumes"
	Jul 29 17:39:02 addons-145541 kubelet[1278]: I0729 17:39:02.255788    1278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e400d01f-50a0-4f78-bd3a-9b0f9c63beab" path="/var/lib/kubelet/pods/e400d01f-50a0-4f78-bd3a-9b0f9c63beab/volumes"
	Jul 29 17:39:05 addons-145541 kubelet[1278]: I0729 17:39:05.231983    1278 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64428\" (UniqueName: \"kubernetes.io/projected/19f38662-68f7-4c9e-b63e-2272dc97f61c-kube-api-access-64428\") pod \"19f38662-68f7-4c9e-b63e-2272dc97f61c\" (UID: \"19f38662-68f7-4c9e-b63e-2272dc97f61c\") "
	Jul 29 17:39:05 addons-145541 kubelet[1278]: I0729 17:39:05.232022    1278 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/19f38662-68f7-4c9e-b63e-2272dc97f61c-webhook-cert\") pod \"19f38662-68f7-4c9e-b63e-2272dc97f61c\" (UID: \"19f38662-68f7-4c9e-b63e-2272dc97f61c\") "
	Jul 29 17:39:05 addons-145541 kubelet[1278]: I0729 17:39:05.238110    1278 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19f38662-68f7-4c9e-b63e-2272dc97f61c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "19f38662-68f7-4c9e-b63e-2272dc97f61c" (UID: "19f38662-68f7-4c9e-b63e-2272dc97f61c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 29 17:39:05 addons-145541 kubelet[1278]: I0729 17:39:05.238224    1278 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19f38662-68f7-4c9e-b63e-2272dc97f61c-kube-api-access-64428" (OuterVolumeSpecName: "kube-api-access-64428") pod "19f38662-68f7-4c9e-b63e-2272dc97f61c" (UID: "19f38662-68f7-4c9e-b63e-2272dc97f61c"). InnerVolumeSpecName "kube-api-access-64428". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 29 17:39:05 addons-145541 kubelet[1278]: I0729 17:39:05.332203    1278 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/19f38662-68f7-4c9e-b63e-2272dc97f61c-webhook-cert\") on node \"addons-145541\" DevicePath \"\""
	Jul 29 17:39:05 addons-145541 kubelet[1278]: I0729 17:39:05.332226    1278 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-64428\" (UniqueName: \"kubernetes.io/projected/19f38662-68f7-4c9e-b63e-2272dc97f61c-kube-api-access-64428\") on node \"addons-145541\" DevicePath \"\""
	Jul 29 17:39:05 addons-145541 kubelet[1278]: I0729 17:39:05.332740    1278 scope.go:117] "RemoveContainer" containerID="008c38acc96fbb8c0cdb2df58ac977731161c2c5bc865610402541da0d24e37f"
	Jul 29 17:39:05 addons-145541 kubelet[1278]: I0729 17:39:05.355298    1278 scope.go:117] "RemoveContainer" containerID="008c38acc96fbb8c0cdb2df58ac977731161c2c5bc865610402541da0d24e37f"
	Jul 29 17:39:05 addons-145541 kubelet[1278]: E0729 17:39:05.356047    1278 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"008c38acc96fbb8c0cdb2df58ac977731161c2c5bc865610402541da0d24e37f\": container with ID starting with 008c38acc96fbb8c0cdb2df58ac977731161c2c5bc865610402541da0d24e37f not found: ID does not exist" containerID="008c38acc96fbb8c0cdb2df58ac977731161c2c5bc865610402541da0d24e37f"
	Jul 29 17:39:05 addons-145541 kubelet[1278]: I0729 17:39:05.356078    1278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"008c38acc96fbb8c0cdb2df58ac977731161c2c5bc865610402541da0d24e37f"} err="failed to get container status \"008c38acc96fbb8c0cdb2df58ac977731161c2c5bc865610402541da0d24e37f\": rpc error: code = NotFound desc = could not find container \"008c38acc96fbb8c0cdb2df58ac977731161c2c5bc865610402541da0d24e37f\": container with ID starting with 008c38acc96fbb8c0cdb2df58ac977731161c2c5bc865610402541da0d24e37f not found: ID does not exist"
	Jul 29 17:39:06 addons-145541 kubelet[1278]: I0729 17:39:06.255032    1278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19f38662-68f7-4c9e-b63e-2272dc97f61c" path="/var/lib/kubelet/pods/19f38662-68f7-4c9e-b63e-2272dc97f61c/volumes"
	
	
	==> storage-provisioner [2638d0f3fe4e51cdc6dae5202502b3489349eb03a0433187d89afa4f04258bb0] <==
	I0729 17:34:34.671086       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 17:34:34.699061       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 17:34:34.699134       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 17:34:34.711840       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 17:34:34.712008       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-145541_32f2532d-6af4-413f-ba99-cadabb66aee9!
	I0729 17:34:34.712512       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b9323c94-5488-4f8f-b4e8-f1ec712e35c7", APIVersion:"v1", ResourceVersion:"653", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-145541_32f2532d-6af4-413f-ba99-cadabb66aee9 became leader
	I0729 17:34:34.812533       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-145541_32f2532d-6af4-413f-ba99-cadabb66aee9!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-145541 -n addons-145541
helpers_test.go:261: (dbg) Run:  kubectl --context addons-145541 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (152.79s)
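With the backend pods shown Running in the container status above (nginx and hello-world-app-6778b5fc9f-t9gs9), the open question for an Ingress failure like this one is whether the ingress-nginx controller itself is healthy and has admitted the Ingress. A minimal manual triage sketch, reusing the addons-145541 context from these logs; the deployment name ingress-nginx-controller is the addon's usual default and is assumed here, not taken from this report:

	# Did the Ingress get a class and an address assigned?
	kubectl --context addons-145541 get ingress -A -o wide

	# Is the controller pod present and Ready in the ingress-nginx namespace?
	kubectl --context addons-145541 -n ingress-nginx get pods -o wide

	# Controller-side view of admission and backend routing (deployment name assumed)
	kubectl --context addons-145541 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=50

An empty ADDRESS column on the Ingress, or no Ready controller pod, points at the controller rather than at the workloads listed above.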

x
+
TestAddons/parallel/MetricsServer (326.95s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.647258ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-twcpr" [729a1011-260e-49bc-9fe9-0f5a13a4f5d7] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.007234921s
addons_test.go:417: (dbg) Run:  kubectl --context addons-145541 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-145541 top pods -n kube-system: exit status 1 (93.76583ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-dfrfm, age: 2m1.62461131s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-145541 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-145541 top pods -n kube-system: exit status 1 (71.860487ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-dfrfm, age: 2m5.630642167s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-145541 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-145541 top pods -n kube-system: exit status 1 (69.127888ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-dfrfm, age: 2m11.130687577s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-145541 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-145541 top pods -n kube-system: exit status 1 (62.334458ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-dfrfm, age: 2m17.877892702s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-145541 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-145541 top pods -n kube-system: exit status 1 (65.10946ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-dfrfm, age: 2m31.428842098s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-145541 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-145541 top pods -n kube-system: exit status 1 (62.030176ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-dfrfm, age: 2m42.541580042s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-145541 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-145541 top pods -n kube-system: exit status 1 (63.761096ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-dfrfm, age: 3m15.397871813s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-145541 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-145541 top pods -n kube-system: exit status 1 (64.931075ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-dfrfm, age: 3m53.273572459s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-145541 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-145541 top pods -n kube-system: exit status 1 (61.173955ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-dfrfm, age: 4m36.534098418s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-145541 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-145541 top pods -n kube-system: exit status 1 (62.033114ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-dfrfm, age: 5m8.083349884s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-145541 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-145541 top pods -n kube-system: exit status 1 (65.369813ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-dfrfm, age: 6m15.878280905s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-145541 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-145541 top pods -n kube-system: exit status 1 (63.297153ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-dfrfm, age: 7m20.871298978s

** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-145541 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-145541 -n addons-145541
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-145541 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-145541 logs -n 25: (1.260676104s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-330185                                                                     | download-only-330185 | jenkins | v1.33.1 | 29 Jul 24 17:33 UTC | 29 Jul 24 17:33 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-423519 | jenkins | v1.33.1 | 29 Jul 24 17:33 UTC |                     |
	|         | binary-mirror-423519                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:45115                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-423519                                                                     | binary-mirror-423519 | jenkins | v1.33.1 | 29 Jul 24 17:33 UTC | 29 Jul 24 17:33 UTC |
	| addons  | disable dashboard -p                                                                        | addons-145541        | jenkins | v1.33.1 | 29 Jul 24 17:33 UTC |                     |
	|         | addons-145541                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-145541        | jenkins | v1.33.1 | 29 Jul 24 17:33 UTC |                     |
	|         | addons-145541                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-145541 --wait=true                                                                | addons-145541        | jenkins | v1.33.1 | 29 Jul 24 17:33 UTC | 29 Jul 24 17:35 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-145541 addons disable                                                                | addons-145541        | jenkins | v1.33.1 | 29 Jul 24 17:36 UTC | 29 Jul 24 17:36 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-145541        | jenkins | v1.33.1 | 29 Jul 24 17:36 UTC | 29 Jul 24 17:36 UTC |
	|         | addons-145541                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-145541 ssh cat                                                                       | addons-145541        | jenkins | v1.33.1 | 29 Jul 24 17:36 UTC | 29 Jul 24 17:36 UTC |
	|         | /opt/local-path-provisioner/pvc-1e5ae59b-219f-4d33-8e28-ea4906311031_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-145541 addons disable                                                                | addons-145541        | jenkins | v1.33.1 | 29 Jul 24 17:36 UTC | 29 Jul 24 17:36 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-145541 addons disable                                                                | addons-145541        | jenkins | v1.33.1 | 29 Jul 24 17:36 UTC | 29 Jul 24 17:36 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-145541        | jenkins | v1.33.1 | 29 Jul 24 17:36 UTC | 29 Jul 24 17:36 UTC |
	|         | -p addons-145541                                                                            |                      |         |         |                     |                     |
	| ip      | addons-145541 ip                                                                            | addons-145541        | jenkins | v1.33.1 | 29 Jul 24 17:36 UTC | 29 Jul 24 17:36 UTC |
	| addons  | enable headlamp                                                                             | addons-145541        | jenkins | v1.33.1 | 29 Jul 24 17:36 UTC | 29 Jul 24 17:36 UTC |
	|         | -p addons-145541                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-145541 addons disable                                                                | addons-145541        | jenkins | v1.33.1 | 29 Jul 24 17:36 UTC | 29 Jul 24 17:36 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-145541 addons disable                                                                | addons-145541        | jenkins | v1.33.1 | 29 Jul 24 17:36 UTC | 29 Jul 24 17:36 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-145541 addons disable                                                                | addons-145541        | jenkins | v1.33.1 | 29 Jul 24 17:36 UTC | 29 Jul 24 17:36 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-145541        | jenkins | v1.33.1 | 29 Jul 24 17:36 UTC | 29 Jul 24 17:36 UTC |
	|         | addons-145541                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-145541 ssh curl -s                                                                   | addons-145541        | jenkins | v1.33.1 | 29 Jul 24 17:36 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-145541 addons                                                                        | addons-145541        | jenkins | v1.33.1 | 29 Jul 24 17:37 UTC | 29 Jul 24 17:37 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-145541 addons                                                                        | addons-145541        | jenkins | v1.33.1 | 29 Jul 24 17:37 UTC | 29 Jul 24 17:37 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-145541 ip                                                                            | addons-145541        | jenkins | v1.33.1 | 29 Jul 24 17:38 UTC | 29 Jul 24 17:38 UTC |
	| addons  | addons-145541 addons disable                                                                | addons-145541        | jenkins | v1.33.1 | 29 Jul 24 17:38 UTC | 29 Jul 24 17:39 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-145541 addons disable                                                                | addons-145541        | jenkins | v1.33.1 | 29 Jul 24 17:39 UTC | 29 Jul 24 17:39 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-145541 addons                                                                        | addons-145541        | jenkins | v1.33.1 | 29 Jul 24 17:41 UTC | 29 Jul 24 17:41 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 17:33:31
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 17:33:31.254351   96181 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:33:31.254589   96181 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:33:31.254598   96181 out.go:304] Setting ErrFile to fd 2...
	I0729 17:33:31.254602   96181 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:33:31.255151   96181 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19339-88081/.minikube/bin
	I0729 17:33:31.256243   96181 out.go:298] Setting JSON to false
	I0729 17:33:31.257200   96181 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":8131,"bootTime":1722266280,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 17:33:31.257270   96181 start.go:139] virtualization: kvm guest
	I0729 17:33:31.259081   96181 out.go:177] * [addons-145541] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 17:33:31.260749   96181 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 17:33:31.260801   96181 notify.go:220] Checking for updates...
	I0729 17:33:31.263249   96181 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 17:33:31.264558   96181 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19339-88081/kubeconfig
	I0729 17:33:31.265678   96181 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19339-88081/.minikube
	I0729 17:33:31.266900   96181 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 17:33:31.268192   96181 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 17:33:31.270076   96181 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 17:33:31.301942   96181 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 17:33:31.303026   96181 start.go:297] selected driver: kvm2
	I0729 17:33:31.303036   96181 start.go:901] validating driver "kvm2" against <nil>
	I0729 17:33:31.303047   96181 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 17:33:31.303793   96181 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:33:31.303871   96181 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19339-88081/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 17:33:31.318919   96181 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 17:33:31.318973   96181 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 17:33:31.319241   96181 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 17:33:31.319307   96181 cni.go:84] Creating CNI manager for ""
	I0729 17:33:31.319324   96181 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 17:33:31.319339   96181 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 17:33:31.319417   96181 start.go:340] cluster config:
	{Name:addons-145541 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-145541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:33:31.319536   96181 iso.go:125] acquiring lock: {Name:mkff602c753bfa3e0d79d57ee8dc490f2e8d0298 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:33:31.321265   96181 out.go:177] * Starting "addons-145541" primary control-plane node in "addons-145541" cluster
	I0729 17:33:31.322476   96181 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 17:33:31.322514   96181 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 17:33:31.322525   96181 cache.go:56] Caching tarball of preloaded images
	I0729 17:33:31.322603   96181 preload.go:172] Found /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 17:33:31.322614   96181 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 17:33:31.322947   96181 profile.go:143] Saving config to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/config.json ...
	I0729 17:33:31.322975   96181 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/config.json: {Name:mk1a0f78a238bdabf9ef6522c2d736b9c116177c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:33:31.323159   96181 start.go:360] acquireMachinesLock for addons-145541: {Name:mkb4fc91615189a18c2505c715f6575ee0da2912 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:33:31.323220   96181 start.go:364] duration metric: took 43.272µs to acquireMachinesLock for "addons-145541"
	I0729 17:33:31.323249   96181 start.go:93] Provisioning new machine with config: &{Name:addons-145541 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-145541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 17:33:31.323323   96181 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 17:33:31.324830   96181 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0729 17:33:31.324981   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:33:31.325017   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:33:31.339807   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44483
	I0729 17:33:31.340252   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:33:31.340897   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:33:31.340920   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:33:31.341260   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:33:31.341455   96181 main.go:141] libmachine: (addons-145541) Calling .GetMachineName
	I0729 17:33:31.341584   96181 main.go:141] libmachine: (addons-145541) Calling .DriverName
	I0729 17:33:31.341713   96181 start.go:159] libmachine.API.Create for "addons-145541" (driver="kvm2")
	I0729 17:33:31.341740   96181 client.go:168] LocalClient.Create starting
	I0729 17:33:31.341781   96181 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem
	I0729 17:33:31.381294   96181 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem
	I0729 17:33:31.448044   96181 main.go:141] libmachine: Running pre-create checks...
	I0729 17:33:31.448067   96181 main.go:141] libmachine: (addons-145541) Calling .PreCreateCheck
	I0729 17:33:31.448555   96181 main.go:141] libmachine: (addons-145541) Calling .GetConfigRaw
	I0729 17:33:31.448977   96181 main.go:141] libmachine: Creating machine...
	I0729 17:33:31.448989   96181 main.go:141] libmachine: (addons-145541) Calling .Create
	I0729 17:33:31.449151   96181 main.go:141] libmachine: (addons-145541) Creating KVM machine...
	I0729 17:33:31.450356   96181 main.go:141] libmachine: (addons-145541) DBG | found existing default KVM network
	I0729 17:33:31.451054   96181 main.go:141] libmachine: (addons-145541) DBG | I0729 17:33:31.450918   96203 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0729 17:33:31.451089   96181 main.go:141] libmachine: (addons-145541) DBG | created network xml: 
	I0729 17:33:31.451104   96181 main.go:141] libmachine: (addons-145541) DBG | <network>
	I0729 17:33:31.451113   96181 main.go:141] libmachine: (addons-145541) DBG |   <name>mk-addons-145541</name>
	I0729 17:33:31.451120   96181 main.go:141] libmachine: (addons-145541) DBG |   <dns enable='no'/>
	I0729 17:33:31.451132   96181 main.go:141] libmachine: (addons-145541) DBG |   
	I0729 17:33:31.451139   96181 main.go:141] libmachine: (addons-145541) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0729 17:33:31.451144   96181 main.go:141] libmachine: (addons-145541) DBG |     <dhcp>
	I0729 17:33:31.451149   96181 main.go:141] libmachine: (addons-145541) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0729 17:33:31.451155   96181 main.go:141] libmachine: (addons-145541) DBG |     </dhcp>
	I0729 17:33:31.451159   96181 main.go:141] libmachine: (addons-145541) DBG |   </ip>
	I0729 17:33:31.451171   96181 main.go:141] libmachine: (addons-145541) DBG |   
	I0729 17:33:31.451182   96181 main.go:141] libmachine: (addons-145541) DBG | </network>
	I0729 17:33:31.451191   96181 main.go:141] libmachine: (addons-145541) DBG | 
	I0729 17:33:31.456308   96181 main.go:141] libmachine: (addons-145541) DBG | trying to create private KVM network mk-addons-145541 192.168.39.0/24...
	I0729 17:33:31.521417   96181 main.go:141] libmachine: (addons-145541) DBG | private KVM network mk-addons-145541 192.168.39.0/24 created
	I0729 17:33:31.521450   96181 main.go:141] libmachine: (addons-145541) Setting up store path in /home/jenkins/minikube-integration/19339-88081/.minikube/machines/addons-145541 ...
	I0729 17:33:31.521462   96181 main.go:141] libmachine: (addons-145541) DBG | I0729 17:33:31.521384   96203 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19339-88081/.minikube
	I0729 17:33:31.521475   96181 main.go:141] libmachine: (addons-145541) Building disk image from file:///home/jenkins/minikube-integration/19339-88081/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0729 17:33:31.521639   96181 main.go:141] libmachine: (addons-145541) Downloading /home/jenkins/minikube-integration/19339-88081/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19339-88081/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0729 17:33:31.764637   96181 main.go:141] libmachine: (addons-145541) DBG | I0729 17:33:31.764471   96203 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/addons-145541/id_rsa...
	I0729 17:33:31.957899   96181 main.go:141] libmachine: (addons-145541) DBG | I0729 17:33:31.957722   96203 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/addons-145541/addons-145541.rawdisk...
	I0729 17:33:31.957944   96181 main.go:141] libmachine: (addons-145541) DBG | Writing magic tar header
	I0729 17:33:31.957962   96181 main.go:141] libmachine: (addons-145541) Setting executable bit set on /home/jenkins/minikube-integration/19339-88081/.minikube/machines/addons-145541 (perms=drwx------)
	I0729 17:33:31.957978   96181 main.go:141] libmachine: (addons-145541) Setting executable bit set on /home/jenkins/minikube-integration/19339-88081/.minikube/machines (perms=drwxr-xr-x)
	I0729 17:33:31.957985   96181 main.go:141] libmachine: (addons-145541) Setting executable bit set on /home/jenkins/minikube-integration/19339-88081/.minikube (perms=drwxr-xr-x)
	I0729 17:33:31.957999   96181 main.go:141] libmachine: (addons-145541) DBG | Writing SSH key tar header
	I0729 17:33:31.958009   96181 main.go:141] libmachine: (addons-145541) Setting executable bit set on /home/jenkins/minikube-integration/19339-88081 (perms=drwxrwxr-x)
	I0729 17:33:31.958022   96181 main.go:141] libmachine: (addons-145541) DBG | I0729 17:33:31.957838   96203 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19339-88081/.minikube/machines/addons-145541 ...
	I0729 17:33:31.958033   96181 main.go:141] libmachine: (addons-145541) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/addons-145541
	I0729 17:33:31.958043   96181 main.go:141] libmachine: (addons-145541) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 17:33:31.958056   96181 main.go:141] libmachine: (addons-145541) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 17:33:31.958062   96181 main.go:141] libmachine: (addons-145541) Creating domain...
	I0729 17:33:31.958072   96181 main.go:141] libmachine: (addons-145541) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19339-88081/.minikube/machines
	I0729 17:33:31.958077   96181 main.go:141] libmachine: (addons-145541) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19339-88081/.minikube
	I0729 17:33:31.958090   96181 main.go:141] libmachine: (addons-145541) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19339-88081
	I0729 17:33:31.958101   96181 main.go:141] libmachine: (addons-145541) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 17:33:31.958110   96181 main.go:141] libmachine: (addons-145541) DBG | Checking permissions on dir: /home/jenkins
	I0729 17:33:31.958127   96181 main.go:141] libmachine: (addons-145541) DBG | Checking permissions on dir: /home
	I0729 17:33:31.958141   96181 main.go:141] libmachine: (addons-145541) DBG | Skipping /home - not owner
	I0729 17:33:31.959213   96181 main.go:141] libmachine: (addons-145541) define libvirt domain using xml: 
	I0729 17:33:31.959239   96181 main.go:141] libmachine: (addons-145541) <domain type='kvm'>
	I0729 17:33:31.959248   96181 main.go:141] libmachine: (addons-145541)   <name>addons-145541</name>
	I0729 17:33:31.959253   96181 main.go:141] libmachine: (addons-145541)   <memory unit='MiB'>4000</memory>
	I0729 17:33:31.959258   96181 main.go:141] libmachine: (addons-145541)   <vcpu>2</vcpu>
	I0729 17:33:31.959263   96181 main.go:141] libmachine: (addons-145541)   <features>
	I0729 17:33:31.959271   96181 main.go:141] libmachine: (addons-145541)     <acpi/>
	I0729 17:33:31.959278   96181 main.go:141] libmachine: (addons-145541)     <apic/>
	I0729 17:33:31.959286   96181 main.go:141] libmachine: (addons-145541)     <pae/>
	I0729 17:33:31.959296   96181 main.go:141] libmachine: (addons-145541)     
	I0729 17:33:31.959305   96181 main.go:141] libmachine: (addons-145541)   </features>
	I0729 17:33:31.959313   96181 main.go:141] libmachine: (addons-145541)   <cpu mode='host-passthrough'>
	I0729 17:33:31.959321   96181 main.go:141] libmachine: (addons-145541)   
	I0729 17:33:31.959346   96181 main.go:141] libmachine: (addons-145541)   </cpu>
	I0729 17:33:31.959358   96181 main.go:141] libmachine: (addons-145541)   <os>
	I0729 17:33:31.959364   96181 main.go:141] libmachine: (addons-145541)     <type>hvm</type>
	I0729 17:33:31.959373   96181 main.go:141] libmachine: (addons-145541)     <boot dev='cdrom'/>
	I0729 17:33:31.959384   96181 main.go:141] libmachine: (addons-145541)     <boot dev='hd'/>
	I0729 17:33:31.959393   96181 main.go:141] libmachine: (addons-145541)     <bootmenu enable='no'/>
	I0729 17:33:31.959406   96181 main.go:141] libmachine: (addons-145541)   </os>
	I0729 17:33:31.959417   96181 main.go:141] libmachine: (addons-145541)   <devices>
	I0729 17:33:31.959425   96181 main.go:141] libmachine: (addons-145541)     <disk type='file' device='cdrom'>
	I0729 17:33:31.959438   96181 main.go:141] libmachine: (addons-145541)       <source file='/home/jenkins/minikube-integration/19339-88081/.minikube/machines/addons-145541/boot2docker.iso'/>
	I0729 17:33:31.959448   96181 main.go:141] libmachine: (addons-145541)       <target dev='hdc' bus='scsi'/>
	I0729 17:33:31.959457   96181 main.go:141] libmachine: (addons-145541)       <readonly/>
	I0729 17:33:31.959464   96181 main.go:141] libmachine: (addons-145541)     </disk>
	I0729 17:33:31.959473   96181 main.go:141] libmachine: (addons-145541)     <disk type='file' device='disk'>
	I0729 17:33:31.959487   96181 main.go:141] libmachine: (addons-145541)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 17:33:31.959501   96181 main.go:141] libmachine: (addons-145541)       <source file='/home/jenkins/minikube-integration/19339-88081/.minikube/machines/addons-145541/addons-145541.rawdisk'/>
	I0729 17:33:31.959512   96181 main.go:141] libmachine: (addons-145541)       <target dev='hda' bus='virtio'/>
	I0729 17:33:31.959523   96181 main.go:141] libmachine: (addons-145541)     </disk>
	I0729 17:33:31.959533   96181 main.go:141] libmachine: (addons-145541)     <interface type='network'>
	I0729 17:33:31.959546   96181 main.go:141] libmachine: (addons-145541)       <source network='mk-addons-145541'/>
	I0729 17:33:31.959558   96181 main.go:141] libmachine: (addons-145541)       <model type='virtio'/>
	I0729 17:33:31.959591   96181 main.go:141] libmachine: (addons-145541)     </interface>
	I0729 17:33:31.959615   96181 main.go:141] libmachine: (addons-145541)     <interface type='network'>
	I0729 17:33:31.959629   96181 main.go:141] libmachine: (addons-145541)       <source network='default'/>
	I0729 17:33:31.959640   96181 main.go:141] libmachine: (addons-145541)       <model type='virtio'/>
	I0729 17:33:31.959651   96181 main.go:141] libmachine: (addons-145541)     </interface>
	I0729 17:33:31.959666   96181 main.go:141] libmachine: (addons-145541)     <serial type='pty'>
	I0729 17:33:31.959678   96181 main.go:141] libmachine: (addons-145541)       <target port='0'/>
	I0729 17:33:31.959689   96181 main.go:141] libmachine: (addons-145541)     </serial>
	I0729 17:33:31.959701   96181 main.go:141] libmachine: (addons-145541)     <console type='pty'>
	I0729 17:33:31.959711   96181 main.go:141] libmachine: (addons-145541)       <target type='serial' port='0'/>
	I0729 17:33:31.959722   96181 main.go:141] libmachine: (addons-145541)     </console>
	I0729 17:33:31.959733   96181 main.go:141] libmachine: (addons-145541)     <rng model='virtio'>
	I0729 17:33:31.959746   96181 main.go:141] libmachine: (addons-145541)       <backend model='random'>/dev/random</backend>
	I0729 17:33:31.959753   96181 main.go:141] libmachine: (addons-145541)     </rng>
	I0729 17:33:31.959760   96181 main.go:141] libmachine: (addons-145541)     
	I0729 17:33:31.959770   96181 main.go:141] libmachine: (addons-145541)     
	I0729 17:33:31.959782   96181 main.go:141] libmachine: (addons-145541)   </devices>
	I0729 17:33:31.959789   96181 main.go:141] libmachine: (addons-145541) </domain>
	I0729 17:33:31.959800   96181 main.go:141] libmachine: (addons-145541) 
	I0729 17:33:31.964203   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:61:14:7f in network default
	I0729 17:33:31.964820   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:31.964835   96181 main.go:141] libmachine: (addons-145541) Ensuring networks are active...
	I0729 17:33:31.965681   96181 main.go:141] libmachine: (addons-145541) Ensuring network default is active
	I0729 17:33:31.965993   96181 main.go:141] libmachine: (addons-145541) Ensuring network mk-addons-145541 is active
	I0729 17:33:31.966505   96181 main.go:141] libmachine: (addons-145541) Getting domain xml...
	I0729 17:33:31.967238   96181 main.go:141] libmachine: (addons-145541) Creating domain...
	I0729 17:33:32.402185   96181 main.go:141] libmachine: (addons-145541) Waiting to get IP...
	I0729 17:33:32.402902   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:32.403286   96181 main.go:141] libmachine: (addons-145541) DBG | unable to find current IP address of domain addons-145541 in network mk-addons-145541
	I0729 17:33:32.403326   96181 main.go:141] libmachine: (addons-145541) DBG | I0729 17:33:32.403243   96203 retry.go:31] will retry after 287.300904ms: waiting for machine to come up
	I0729 17:33:32.691769   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:32.692260   96181 main.go:141] libmachine: (addons-145541) DBG | unable to find current IP address of domain addons-145541 in network mk-addons-145541
	I0729 17:33:32.692288   96181 main.go:141] libmachine: (addons-145541) DBG | I0729 17:33:32.692209   96203 retry.go:31] will retry after 343.601877ms: waiting for machine to come up
	I0729 17:33:33.037850   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:33.038295   96181 main.go:141] libmachine: (addons-145541) DBG | unable to find current IP address of domain addons-145541 in network mk-addons-145541
	I0729 17:33:33.038327   96181 main.go:141] libmachine: (addons-145541) DBG | I0729 17:33:33.038257   96203 retry.go:31] will retry after 301.189756ms: waiting for machine to come up
	I0729 17:33:33.340710   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:33.341111   96181 main.go:141] libmachine: (addons-145541) DBG | unable to find current IP address of domain addons-145541 in network mk-addons-145541
	I0729 17:33:33.341136   96181 main.go:141] libmachine: (addons-145541) DBG | I0729 17:33:33.341058   96203 retry.go:31] will retry after 573.552478ms: waiting for machine to come up
	I0729 17:33:33.915817   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:33.916267   96181 main.go:141] libmachine: (addons-145541) DBG | unable to find current IP address of domain addons-145541 in network mk-addons-145541
	I0729 17:33:33.916309   96181 main.go:141] libmachine: (addons-145541) DBG | I0729 17:33:33.916225   96203 retry.go:31] will retry after 667.32481ms: waiting for machine to come up
	I0729 17:33:34.584997   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:34.585451   96181 main.go:141] libmachine: (addons-145541) DBG | unable to find current IP address of domain addons-145541 in network mk-addons-145541
	I0729 17:33:34.585481   96181 main.go:141] libmachine: (addons-145541) DBG | I0729 17:33:34.585413   96203 retry.go:31] will retry after 908.789948ms: waiting for machine to come up
	I0729 17:33:35.495355   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:35.495740   96181 main.go:141] libmachine: (addons-145541) DBG | unable to find current IP address of domain addons-145541 in network mk-addons-145541
	I0729 17:33:35.495769   96181 main.go:141] libmachine: (addons-145541) DBG | I0729 17:33:35.495704   96203 retry.go:31] will retry after 850.715135ms: waiting for machine to come up
	I0729 17:33:36.348259   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:36.348761   96181 main.go:141] libmachine: (addons-145541) DBG | unable to find current IP address of domain addons-145541 in network mk-addons-145541
	I0729 17:33:36.348789   96181 main.go:141] libmachine: (addons-145541) DBG | I0729 17:33:36.348718   96203 retry.go:31] will retry after 1.473559482s: waiting for machine to come up
	I0729 17:33:37.824316   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:37.824678   96181 main.go:141] libmachine: (addons-145541) DBG | unable to find current IP address of domain addons-145541 in network mk-addons-145541
	I0729 17:33:37.824705   96181 main.go:141] libmachine: (addons-145541) DBG | I0729 17:33:37.824646   96203 retry.go:31] will retry after 1.831409289s: waiting for machine to come up
	I0729 17:33:39.658781   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:39.659200   96181 main.go:141] libmachine: (addons-145541) DBG | unable to find current IP address of domain addons-145541 in network mk-addons-145541
	I0729 17:33:39.659228   96181 main.go:141] libmachine: (addons-145541) DBG | I0729 17:33:39.659155   96203 retry.go:31] will retry after 1.571944606s: waiting for machine to come up
	I0729 17:33:41.233074   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:41.233482   96181 main.go:141] libmachine: (addons-145541) DBG | unable to find current IP address of domain addons-145541 in network mk-addons-145541
	I0729 17:33:41.233516   96181 main.go:141] libmachine: (addons-145541) DBG | I0729 17:33:41.233440   96203 retry.go:31] will retry after 1.965774308s: waiting for machine to come up
	I0729 17:33:43.200345   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:43.200741   96181 main.go:141] libmachine: (addons-145541) DBG | unable to find current IP address of domain addons-145541 in network mk-addons-145541
	I0729 17:33:43.200765   96181 main.go:141] libmachine: (addons-145541) DBG | I0729 17:33:43.200690   96203 retry.go:31] will retry after 2.970460633s: waiting for machine to come up
	I0729 17:33:46.174691   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:46.175085   96181 main.go:141] libmachine: (addons-145541) DBG | unable to find current IP address of domain addons-145541 in network mk-addons-145541
	I0729 17:33:46.175116   96181 main.go:141] libmachine: (addons-145541) DBG | I0729 17:33:46.175013   96203 retry.go:31] will retry after 2.890326841s: waiting for machine to come up
	I0729 17:33:49.068417   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:49.068783   96181 main.go:141] libmachine: (addons-145541) DBG | unable to find current IP address of domain addons-145541 in network mk-addons-145541
	I0729 17:33:49.068804   96181 main.go:141] libmachine: (addons-145541) DBG | I0729 17:33:49.068729   96203 retry.go:31] will retry after 3.99642521s: waiting for machine to come up
	I0729 17:33:53.067633   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:53.067989   96181 main.go:141] libmachine: (addons-145541) Found IP for machine: 192.168.39.242
	I0729 17:33:53.068016   96181 main.go:141] libmachine: (addons-145541) Reserving static IP address...
	I0729 17:33:53.068030   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has current primary IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:53.068310   96181 main.go:141] libmachine: (addons-145541) DBG | unable to find host DHCP lease matching {name: "addons-145541", mac: "52:54:00:25:f4:2d", ip: "192.168.39.242"} in network mk-addons-145541
	I0729 17:33:53.214731   96181 main.go:141] libmachine: (addons-145541) DBG | Getting to WaitForSSH function...
	I0729 17:33:53.214767   96181 main.go:141] libmachine: (addons-145541) Reserved static IP address: 192.168.39.242
	I0729 17:33:53.214787   96181 main.go:141] libmachine: (addons-145541) Waiting for SSH to be available...
	I0729 17:33:53.217476   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:53.217820   96181 main.go:141] libmachine: (addons-145541) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541
	I0729 17:33:53.217844   96181 main.go:141] libmachine: (addons-145541) DBG | unable to find defined IP address of network mk-addons-145541 interface with MAC address 52:54:00:25:f4:2d
	I0729 17:33:53.217977   96181 main.go:141] libmachine: (addons-145541) DBG | Using SSH client type: external
	I0729 17:33:53.218002   96181 main.go:141] libmachine: (addons-145541) DBG | Using SSH private key: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/addons-145541/id_rsa (-rw-------)
	I0729 17:33:53.218047   96181 main.go:141] libmachine: (addons-145541) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19339-88081/.minikube/machines/addons-145541/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 17:33:53.218061   96181 main.go:141] libmachine: (addons-145541) DBG | About to run SSH command:
	I0729 17:33:53.218098   96181 main.go:141] libmachine: (addons-145541) DBG | exit 0
	I0729 17:33:53.221696   96181 main.go:141] libmachine: (addons-145541) DBG | SSH cmd err, output: exit status 255: 
	I0729 17:33:53.221714   96181 main.go:141] libmachine: (addons-145541) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0729 17:33:53.221721   96181 main.go:141] libmachine: (addons-145541) DBG | command : exit 0
	I0729 17:33:53.221726   96181 main.go:141] libmachine: (addons-145541) DBG | err     : exit status 255
	I0729 17:33:53.221733   96181 main.go:141] libmachine: (addons-145541) DBG | output  : 
	I0729 17:33:56.222144   96181 main.go:141] libmachine: (addons-145541) DBG | Getting to WaitForSSH function...
	I0729 17:33:56.224694   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:56.225062   96181 main.go:141] libmachine: (addons-145541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541: {Iface:virbr1 ExpiryTime:2024-07-29 18:33:45 +0000 UTC Type:0 Mac:52:54:00:25:f4:2d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:addons-145541 Clientid:01:52:54:00:25:f4:2d}
	I0729 17:33:56.225093   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:56.225204   96181 main.go:141] libmachine: (addons-145541) DBG | Using SSH client type: external
	I0729 17:33:56.225228   96181 main.go:141] libmachine: (addons-145541) DBG | Using SSH private key: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/addons-145541/id_rsa (-rw-------)
	I0729 17:33:56.225274   96181 main.go:141] libmachine: (addons-145541) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.242 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19339-88081/.minikube/machines/addons-145541/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 17:33:56.225286   96181 main.go:141] libmachine: (addons-145541) DBG | About to run SSH command:
	I0729 17:33:56.225316   96181 main.go:141] libmachine: (addons-145541) DBG | exit 0
	I0729 17:33:56.344777   96181 main.go:141] libmachine: (addons-145541) DBG | SSH cmd err, output: <nil>: 
	I0729 17:33:56.345061   96181 main.go:141] libmachine: (addons-145541) KVM machine creation complete!
	I0729 17:33:56.345375   96181 main.go:141] libmachine: (addons-145541) Calling .GetConfigRaw
	I0729 17:33:56.345885   96181 main.go:141] libmachine: (addons-145541) Calling .DriverName
	I0729 17:33:56.346064   96181 main.go:141] libmachine: (addons-145541) Calling .DriverName
	I0729 17:33:56.346238   96181 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 17:33:56.346253   96181 main.go:141] libmachine: (addons-145541) Calling .GetState
	I0729 17:33:56.347573   96181 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 17:33:56.347589   96181 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 17:33:56.347596   96181 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 17:33:56.347604   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHHostname
	I0729 17:33:56.349869   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:56.350222   96181 main.go:141] libmachine: (addons-145541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541: {Iface:virbr1 ExpiryTime:2024-07-29 18:33:45 +0000 UTC Type:0 Mac:52:54:00:25:f4:2d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:addons-145541 Clientid:01:52:54:00:25:f4:2d}
	I0729 17:33:56.350247   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:56.350389   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHPort
	I0729 17:33:56.350590   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHKeyPath
	I0729 17:33:56.350733   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHKeyPath
	I0729 17:33:56.350888   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHUsername
	I0729 17:33:56.351019   96181 main.go:141] libmachine: Using SSH client type: native
	I0729 17:33:56.351202   96181 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.242 22 <nil> <nil>}
	I0729 17:33:56.351212   96181 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 17:33:56.448185   96181 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 17:33:56.448216   96181 main.go:141] libmachine: Detecting the provisioner...
	I0729 17:33:56.448229   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHHostname
	I0729 17:33:56.450961   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:56.451286   96181 main.go:141] libmachine: (addons-145541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541: {Iface:virbr1 ExpiryTime:2024-07-29 18:33:45 +0000 UTC Type:0 Mac:52:54:00:25:f4:2d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:addons-145541 Clientid:01:52:54:00:25:f4:2d}
	I0729 17:33:56.451321   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:56.451437   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHPort
	I0729 17:33:56.451646   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHKeyPath
	I0729 17:33:56.451830   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHKeyPath
	I0729 17:33:56.451990   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHUsername
	I0729 17:33:56.452306   96181 main.go:141] libmachine: Using SSH client type: native
	I0729 17:33:56.452489   96181 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.242 22 <nil> <nil>}
	I0729 17:33:56.452501   96181 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 17:33:56.549426   96181 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 17:33:56.549496   96181 main.go:141] libmachine: found compatible host: buildroot
	I0729 17:33:56.549505   96181 main.go:141] libmachine: Provisioning with buildroot...
	I0729 17:33:56.549513   96181 main.go:141] libmachine: (addons-145541) Calling .GetMachineName
	I0729 17:33:56.549827   96181 buildroot.go:166] provisioning hostname "addons-145541"
	I0729 17:33:56.549860   96181 main.go:141] libmachine: (addons-145541) Calling .GetMachineName
	I0729 17:33:56.550048   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHHostname
	I0729 17:33:56.552558   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:56.552891   96181 main.go:141] libmachine: (addons-145541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541: {Iface:virbr1 ExpiryTime:2024-07-29 18:33:45 +0000 UTC Type:0 Mac:52:54:00:25:f4:2d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:addons-145541 Clientid:01:52:54:00:25:f4:2d}
	I0729 17:33:56.552919   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:56.553018   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHPort
	I0729 17:33:56.553189   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHKeyPath
	I0729 17:33:56.553354   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHKeyPath
	I0729 17:33:56.553474   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHUsername
	I0729 17:33:56.553626   96181 main.go:141] libmachine: Using SSH client type: native
	I0729 17:33:56.553798   96181 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.242 22 <nil> <nil>}
	I0729 17:33:56.553810   96181 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-145541 && echo "addons-145541" | sudo tee /etc/hostname
	I0729 17:33:56.662617   96181 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-145541
	
	I0729 17:33:56.662646   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHHostname
	I0729 17:33:56.665196   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:56.665552   96181 main.go:141] libmachine: (addons-145541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541: {Iface:virbr1 ExpiryTime:2024-07-29 18:33:45 +0000 UTC Type:0 Mac:52:54:00:25:f4:2d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:addons-145541 Clientid:01:52:54:00:25:f4:2d}
	I0729 17:33:56.665586   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:56.665810   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHPort
	I0729 17:33:56.666023   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHKeyPath
	I0729 17:33:56.666202   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHKeyPath
	I0729 17:33:56.666346   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHUsername
	I0729 17:33:56.666520   96181 main.go:141] libmachine: Using SSH client type: native
	I0729 17:33:56.666680   96181 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.242 22 <nil> <nil>}
	I0729 17:33:56.666694   96181 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-145541' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-145541/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-145541' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 17:33:56.769050   96181 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 17:33:56.769091   96181 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19339-88081/.minikube CaCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19339-88081/.minikube}
	I0729 17:33:56.769118   96181 buildroot.go:174] setting up certificates
	I0729 17:33:56.769141   96181 provision.go:84] configureAuth start
	I0729 17:33:56.769154   96181 main.go:141] libmachine: (addons-145541) Calling .GetMachineName
	I0729 17:33:56.769458   96181 main.go:141] libmachine: (addons-145541) Calling .GetIP
	I0729 17:33:56.771893   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:56.772247   96181 main.go:141] libmachine: (addons-145541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541: {Iface:virbr1 ExpiryTime:2024-07-29 18:33:45 +0000 UTC Type:0 Mac:52:54:00:25:f4:2d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:addons-145541 Clientid:01:52:54:00:25:f4:2d}
	I0729 17:33:56.772268   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:56.772420   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHHostname
	I0729 17:33:56.774712   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:56.774985   96181 main.go:141] libmachine: (addons-145541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541: {Iface:virbr1 ExpiryTime:2024-07-29 18:33:45 +0000 UTC Type:0 Mac:52:54:00:25:f4:2d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:addons-145541 Clientid:01:52:54:00:25:f4:2d}
	I0729 17:33:56.775010   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:56.775126   96181 provision.go:143] copyHostCerts
	I0729 17:33:56.775207   96181 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem (1078 bytes)
	I0729 17:33:56.775332   96181 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem (1123 bytes)
	I0729 17:33:56.775403   96181 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem (1679 bytes)
	I0729 17:33:56.775461   96181 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem org=jenkins.addons-145541 san=[127.0.0.1 192.168.39.242 addons-145541 localhost minikube]
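The server certificate generated here carries the SANs listed in the san=[...] field above (loopback, the VM IP 192.168.39.242, and the host/machine names). A small inspection sketch, assuming openssl is available on the Jenkins host; this is not a step the logged run performs:

    # Hypothetical inspection of the generated server certificate's SANs.
    openssl x509 -noout -text \
        -in /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem \
        | grep -A1 'Subject Alternative Name'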
	I0729 17:33:56.904923   96181 provision.go:177] copyRemoteCerts
	I0729 17:33:56.905004   96181 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 17:33:56.905031   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHHostname
	I0729 17:33:56.907713   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:56.908041   96181 main.go:141] libmachine: (addons-145541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541: {Iface:virbr1 ExpiryTime:2024-07-29 18:33:45 +0000 UTC Type:0 Mac:52:54:00:25:f4:2d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:addons-145541 Clientid:01:52:54:00:25:f4:2d}
	I0729 17:33:56.908077   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:56.908245   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHPort
	I0729 17:33:56.908426   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHKeyPath
	I0729 17:33:56.908575   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHUsername
	I0729 17:33:56.908702   96181 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/addons-145541/id_rsa Username:docker}
	I0729 17:33:56.986835   96181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 17:33:57.010570   96181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0729 17:33:57.033836   96181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 17:33:57.056508   96181 provision.go:87] duration metric: took 287.352961ms to configureAuth
	I0729 17:33:57.056534   96181 buildroot.go:189] setting minikube options for container-runtime
	I0729 17:33:57.056692   96181 config.go:182] Loaded profile config "addons-145541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:33:57.056766   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHHostname
	I0729 17:33:57.059447   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:57.059757   96181 main.go:141] libmachine: (addons-145541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541: {Iface:virbr1 ExpiryTime:2024-07-29 18:33:45 +0000 UTC Type:0 Mac:52:54:00:25:f4:2d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:addons-145541 Clientid:01:52:54:00:25:f4:2d}
	I0729 17:33:57.059785   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:57.059944   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHPort
	I0729 17:33:57.060103   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHKeyPath
	I0729 17:33:57.060230   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHKeyPath
	I0729 17:33:57.060348   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHUsername
	I0729 17:33:57.060481   96181 main.go:141] libmachine: Using SSH client type: native
	I0729 17:33:57.060673   96181 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.242 22 <nil> <nil>}
	I0729 17:33:57.060693   96181 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 17:33:57.312436   96181 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 17:33:57.312468   96181 main.go:141] libmachine: Checking connection to Docker...
	I0729 17:33:57.312479   96181 main.go:141] libmachine: (addons-145541) Calling .GetURL
	I0729 17:33:57.313738   96181 main.go:141] libmachine: (addons-145541) DBG | Using libvirt version 6000000
	I0729 17:33:57.315599   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:57.315906   96181 main.go:141] libmachine: (addons-145541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541: {Iface:virbr1 ExpiryTime:2024-07-29 18:33:45 +0000 UTC Type:0 Mac:52:54:00:25:f4:2d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:addons-145541 Clientid:01:52:54:00:25:f4:2d}
	I0729 17:33:57.315937   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:57.316047   96181 main.go:141] libmachine: Docker is up and running!
	I0729 17:33:57.316061   96181 main.go:141] libmachine: Reticulating splines...
	I0729 17:33:57.316070   96181 client.go:171] duration metric: took 25.974318348s to LocalClient.Create
	I0729 17:33:57.316097   96181 start.go:167] duration metric: took 25.974384032s to libmachine.API.Create "addons-145541"
	I0729 17:33:57.316110   96181 start.go:293] postStartSetup for "addons-145541" (driver="kvm2")
	I0729 17:33:57.316126   96181 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 17:33:57.316150   96181 main.go:141] libmachine: (addons-145541) Calling .DriverName
	I0729 17:33:57.316414   96181 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 17:33:57.316439   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHHostname
	I0729 17:33:57.318293   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:57.318591   96181 main.go:141] libmachine: (addons-145541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541: {Iface:virbr1 ExpiryTime:2024-07-29 18:33:45 +0000 UTC Type:0 Mac:52:54:00:25:f4:2d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:addons-145541 Clientid:01:52:54:00:25:f4:2d}
	I0729 17:33:57.318618   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:57.318719   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHPort
	I0729 17:33:57.318926   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHKeyPath
	I0729 17:33:57.319084   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHUsername
	I0729 17:33:57.319231   96181 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/addons-145541/id_rsa Username:docker}
	I0729 17:33:57.394349   96181 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 17:33:57.398422   96181 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 17:33:57.398443   96181 filesync.go:126] Scanning /home/jenkins/minikube-integration/19339-88081/.minikube/addons for local assets ...
	I0729 17:33:57.398511   96181 filesync.go:126] Scanning /home/jenkins/minikube-integration/19339-88081/.minikube/files for local assets ...
	I0729 17:33:57.398536   96181 start.go:296] duration metric: took 82.416834ms for postStartSetup
	I0729 17:33:57.398569   96181 main.go:141] libmachine: (addons-145541) Calling .GetConfigRaw
	I0729 17:33:57.399137   96181 main.go:141] libmachine: (addons-145541) Calling .GetIP
	I0729 17:33:57.401585   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:57.401902   96181 main.go:141] libmachine: (addons-145541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541: {Iface:virbr1 ExpiryTime:2024-07-29 18:33:45 +0000 UTC Type:0 Mac:52:54:00:25:f4:2d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:addons-145541 Clientid:01:52:54:00:25:f4:2d}
	I0729 17:33:57.401929   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:57.402116   96181 profile.go:143] Saving config to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/config.json ...
	I0729 17:33:57.402293   96181 start.go:128] duration metric: took 26.078958709s to createHost
	I0729 17:33:57.402332   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHHostname
	I0729 17:33:57.404344   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:57.404625   96181 main.go:141] libmachine: (addons-145541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541: {Iface:virbr1 ExpiryTime:2024-07-29 18:33:45 +0000 UTC Type:0 Mac:52:54:00:25:f4:2d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:addons-145541 Clientid:01:52:54:00:25:f4:2d}
	I0729 17:33:57.404649   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:57.404756   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHPort
	I0729 17:33:57.404958   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHKeyPath
	I0729 17:33:57.405105   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHKeyPath
	I0729 17:33:57.405222   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHUsername
	I0729 17:33:57.405395   96181 main.go:141] libmachine: Using SSH client type: native
	I0729 17:33:57.405556   96181 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.242 22 <nil> <nil>}
	I0729 17:33:57.405566   96181 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 17:33:57.501341   96181 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722274437.480741329
	
	I0729 17:33:57.501367   96181 fix.go:216] guest clock: 1722274437.480741329
	I0729 17:33:57.501379   96181 fix.go:229] Guest: 2024-07-29 17:33:57.480741329 +0000 UTC Remote: 2024-07-29 17:33:57.402304592 +0000 UTC m=+26.183051826 (delta=78.436737ms)
	I0729 17:33:57.501411   96181 fix.go:200] guest clock delta is within tolerance: 78.436737ms
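The tolerance check above compares the guest clock (1722274437.480741329) with the remote timestamp taken just before it (1722274437.402304592); the reported delta of 78.436737ms is simply the difference of the fractional seconds. A one-line worked check of that arithmetic, as a sketch only:

    # Reproduce the delta reported in the log (seconds).
    awk 'BEGIN { printf "%.9f\n", 0.480741329 - 0.402304592 }'   # -> 0.078436737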
	I0729 17:33:57.501423   96181 start.go:83] releasing machines lock for "addons-145541", held for 26.178190487s
	I0729 17:33:57.501468   96181 main.go:141] libmachine: (addons-145541) Calling .DriverName
	I0729 17:33:57.501729   96181 main.go:141] libmachine: (addons-145541) Calling .GetIP
	I0729 17:33:57.504025   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:57.504381   96181 main.go:141] libmachine: (addons-145541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541: {Iface:virbr1 ExpiryTime:2024-07-29 18:33:45 +0000 UTC Type:0 Mac:52:54:00:25:f4:2d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:addons-145541 Clientid:01:52:54:00:25:f4:2d}
	I0729 17:33:57.504412   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:57.504496   96181 main.go:141] libmachine: (addons-145541) Calling .DriverName
	I0729 17:33:57.505058   96181 main.go:141] libmachine: (addons-145541) Calling .DriverName
	I0729 17:33:57.505229   96181 main.go:141] libmachine: (addons-145541) Calling .DriverName
	I0729 17:33:57.505345   96181 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 17:33:57.505399   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHHostname
	I0729 17:33:57.505447   96181 ssh_runner.go:195] Run: cat /version.json
	I0729 17:33:57.505472   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHHostname
	I0729 17:33:57.507746   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:57.508029   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:57.508115   96181 main.go:141] libmachine: (addons-145541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541: {Iface:virbr1 ExpiryTime:2024-07-29 18:33:45 +0000 UTC Type:0 Mac:52:54:00:25:f4:2d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:addons-145541 Clientid:01:52:54:00:25:f4:2d}
	I0729 17:33:57.508142   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:57.508253   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHPort
	I0729 17:33:57.508401   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHKeyPath
	I0729 17:33:57.508463   96181 main.go:141] libmachine: (addons-145541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541: {Iface:virbr1 ExpiryTime:2024-07-29 18:33:45 +0000 UTC Type:0 Mac:52:54:00:25:f4:2d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:addons-145541 Clientid:01:52:54:00:25:f4:2d}
	I0729 17:33:57.508487   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:57.508530   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHUsername
	I0729 17:33:57.508668   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHPort
	I0729 17:33:57.508691   96181 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/addons-145541/id_rsa Username:docker}
	I0729 17:33:57.508827   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHKeyPath
	I0729 17:33:57.509002   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHUsername
	I0729 17:33:57.509130   96181 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/addons-145541/id_rsa Username:docker}
	I0729 17:33:57.601477   96181 ssh_runner.go:195] Run: systemctl --version
	I0729 17:33:57.607389   96181 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 17:33:57.769040   96181 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 17:33:57.775545   96181 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 17:33:57.775650   96181 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 17:33:57.792953   96181 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 17:33:57.792979   96181 start.go:495] detecting cgroup driver to use...
	I0729 17:33:57.793045   96181 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 17:33:57.811468   96181 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 17:33:57.826077   96181 docker.go:217] disabling cri-docker service (if available) ...
	I0729 17:33:57.826142   96181 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 17:33:57.840097   96181 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 17:33:57.853831   96181 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 17:33:57.972561   96181 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 17:33:58.136458   96181 docker.go:233] disabling docker service ...
	I0729 17:33:58.136530   96181 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 17:33:58.151265   96181 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 17:33:58.164102   96181 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 17:33:58.276623   96181 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 17:33:58.392511   96181 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 17:33:58.406905   96181 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 17:33:58.424771   96181 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 17:33:58.424833   96181 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:33:58.435285   96181 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 17:33:58.435352   96181 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:33:58.445861   96181 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:33:58.456768   96181 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:33:58.467842   96181 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 17:33:58.478680   96181 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:33:58.491453   96181 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:33:58.509245   96181 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
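The run of sed and grep commands above rewrites CRI-O's drop-in at /etc/crio/crio.conf.d/02-crio.conf: it pins the pause image, switches the cgroup manager to cgroupfs, forces conmon into the pod cgroup, and adds net.ipv4.ip_unprivileged_port_start=0 to default_sysctls. A minimal verification sketch, assuming interactive shell access to the node (not a step performed in this run):

    # Hypothetical check: confirm the values the substitutions above should leave behind.
    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # Expected, per the sed expressions in the log:
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",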
	I0729 17:33:58.519956   96181 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 17:33:58.529329   96181 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 17:33:58.529390   96181 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 17:33:58.541915   96181 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 17:33:58.551442   96181 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 17:33:58.674796   96181 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 17:33:59.052308   96181 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 17:33:59.052400   96181 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 17:33:59.057314   96181 start.go:563] Will wait 60s for crictl version
	I0729 17:33:59.057384   96181 ssh_runner.go:195] Run: which crictl
	I0729 17:33:59.061260   96181 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 17:33:59.102524   96181 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 17:33:59.102645   96181 ssh_runner.go:195] Run: crio --version
	I0729 17:33:59.129700   96181 ssh_runner.go:195] Run: crio --version
	I0729 17:33:59.275153   96181 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 17:33:59.338558   96181 main.go:141] libmachine: (addons-145541) Calling .GetIP
	I0729 17:33:59.341415   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:59.341713   96181 main.go:141] libmachine: (addons-145541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541: {Iface:virbr1 ExpiryTime:2024-07-29 18:33:45 +0000 UTC Type:0 Mac:52:54:00:25:f4:2d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:addons-145541 Clientid:01:52:54:00:25:f4:2d}
	I0729 17:33:59.341742   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:33:59.341974   96181 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 17:33:59.346512   96181 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 17:33:59.359083   96181 kubeadm.go:883] updating cluster {Name:addons-145541 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:addons-145541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.242 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 17:33:59.359229   96181 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 17:33:59.359273   96181 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 17:33:59.390018   96181 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 17:33:59.390090   96181 ssh_runner.go:195] Run: which lz4
	I0729 17:33:59.394262   96181 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 17:33:59.398567   96181 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 17:33:59.398608   96181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 17:34:00.710651   96181 crio.go:462] duration metric: took 1.316514502s to copy over tarball
	I0729 17:34:00.710724   96181 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 17:34:02.900978   96181 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.190220668s)
	I0729 17:34:02.901012   96181 crio.go:469] duration metric: took 2.190328331s to extract the tarball
	I0729 17:34:02.901023   96181 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 17:34:02.938961   96181 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 17:34:02.982550   96181 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 17:34:02.982582   96181 cache_images.go:84] Images are preloaded, skipping loading
	I0729 17:34:02.982592   96181 kubeadm.go:934] updating node { 192.168.39.242 8443 v1.30.3 crio true true} ...
	I0729 17:34:02.982725   96181 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-145541 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.242
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-145541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
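The kubelet unit summary above becomes the systemd drop-in written at 17:34:03.049461 (the 313-byte scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf). A hedged sketch for confirming the effective flags afterwards, assuming shell access to the node; the logged run does not do this:

    # Hypothetical check: show the merged kubelet unit and the ExecStart flags minikube injected.
    systemctl cat kubelet | grep -A1 '^ExecStart='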
	I0729 17:34:02.982792   96181 ssh_runner.go:195] Run: crio config
	I0729 17:34:03.029296   96181 cni.go:84] Creating CNI manager for ""
	I0729 17:34:03.029318   96181 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 17:34:03.029328   96181 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 17:34:03.029350   96181 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.242 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-145541 NodeName:addons-145541 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.242"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.242 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 17:34:03.029487   96181 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.242
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-145541"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.242
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.242"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
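	The rendered kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new shortly afterwards (the 2157-byte scp at 17:34:03.081096) and fed to kubeadm init at 17:34:04.170791. As a sketch only, assuming the kubeadm config validate subcommand shipped with recent kubeadm releases, the same file could be checked by hand before init; this is not part of the minikube flow shown here:

	    # Hypothetical pre-flight check of the rendered config.
	    sudo /var/lib/minikube/binaries/v1.30.3/kubeadm config validate \
	        --config /var/tmp/minikube/kubeadm.yaml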
	
	I0729 17:34:03.029548   96181 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 17:34:03.039754   96181 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 17:34:03.039832   96181 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 17:34:03.049461   96181 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0729 17:34:03.065464   96181 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 17:34:03.081096   96181 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0729 17:34:03.096562   96181 ssh_runner.go:195] Run: grep 192.168.39.242	control-plane.minikube.internal$ /etc/hosts
	I0729 17:34:03.100157   96181 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.242	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 17:34:03.112040   96181 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 17:34:03.236931   96181 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 17:34:03.253661   96181 certs.go:68] Setting up /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541 for IP: 192.168.39.242
	I0729 17:34:03.253685   96181 certs.go:194] generating shared ca certs ...
	I0729 17:34:03.253704   96181 certs.go:226] acquiring lock for ca certs: {Name:mk20106d58b3f22ea0fc0f4b499fa3fa572a2690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:34:03.253865   96181 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key
	I0729 17:34:03.435416   96181 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt ...
	I0729 17:34:03.435447   96181 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt: {Name:mkcdc05dbad796c476f02d51b3a2d88a15d0d683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:34:03.435610   96181 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key ...
	I0729 17:34:03.435621   96181 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key: {Name:mk0b8766ee3521c080cdd099e5be695daddeacb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:34:03.435695   96181 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key
	I0729 17:34:03.479194   96181 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.crt ...
	I0729 17:34:03.479221   96181 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.crt: {Name:mk0a16b6fef48a2455bf549200f59231422c45e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:34:03.479382   96181 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key ...
	I0729 17:34:03.479395   96181 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key: {Name:mk91b34d44bbab81d266825125925925d9e53f1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:34:03.479468   96181 certs.go:256] generating profile certs ...
	I0729 17:34:03.479549   96181 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/client.key
	I0729 17:34:03.479563   96181 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/client.crt with IP's: []
	I0729 17:34:03.551828   96181 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/client.crt ...
	I0729 17:34:03.551853   96181 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/client.crt: {Name:mk2ca63031f899e556ef4a518b28dbec6a1faf6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:34:03.551991   96181 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/client.key ...
	I0729 17:34:03.552001   96181 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/client.key: {Name:mk270c9e5c3cb7083a0750c829f349028aecab2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:34:03.552065   96181 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/apiserver.key.bccb6cf7
	I0729 17:34:03.552083   96181 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/apiserver.crt.bccb6cf7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.242]
	I0729 17:34:03.671667   96181 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/apiserver.crt.bccb6cf7 ...
	I0729 17:34:03.671696   96181 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/apiserver.crt.bccb6cf7: {Name:mkfcad8a7b6f08239890db5a75dd879612f7fc5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:34:03.671839   96181 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/apiserver.key.bccb6cf7 ...
	I0729 17:34:03.671857   96181 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/apiserver.key.bccb6cf7: {Name:mkc1cbe6197ba105da01b2e8ce9bf54e050e4c11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:34:03.671951   96181 certs.go:381] copying /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/apiserver.crt.bccb6cf7 -> /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/apiserver.crt
	I0729 17:34:03.672038   96181 certs.go:385] copying /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/apiserver.key.bccb6cf7 -> /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/apiserver.key
	I0729 17:34:03.672103   96181 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/proxy-client.key
	I0729 17:34:03.672128   96181 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/proxy-client.crt with IP's: []
	I0729 17:34:03.763967   96181 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/proxy-client.crt ...
	I0729 17:34:03.763995   96181 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/proxy-client.crt: {Name:mkf837c991a91f96016882e96dd66956c2f5bd2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:34:03.764141   96181 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/proxy-client.key ...
	I0729 17:34:03.764151   96181 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/proxy-client.key: {Name:mkca8e35f0205f8941a850440a2051578e9359b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:34:03.764306   96181 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 17:34:03.764339   96181 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem (1078 bytes)
	I0729 17:34:03.764363   96181 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem (1123 bytes)
	I0729 17:34:03.764389   96181 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem (1679 bytes)
	I0729 17:34:03.765019   96181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 17:34:03.789314   96181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 17:34:03.811545   96181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 17:34:03.833421   96181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 17:34:03.855707   96181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0729 17:34:03.877692   96181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 17:34:03.899963   96181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 17:34:03.923645   96181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 17:34:03.947570   96181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 17:34:03.969085   96181 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 17:34:03.989183   96181 ssh_runner.go:195] Run: openssl version
	I0729 17:34:03.995603   96181 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 17:34:04.006642   96181 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:34:04.011204   96181 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:34:04.011262   96181 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:34:04.016987   96181 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
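The two commands above wire the minikube CA into the system trust store: the first symlinks minikubeCA.pem into /etc/ssl/certs, and the second names a companion symlink after the OpenSSL subject hash (b5213941 in this run). A short sketch of the same convention, assuming the CA file is already on the node:

    # Sketch of the hash-named symlink convention used above.
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
    # In this log the computed hash is b5213941, hence /etc/ssl/certs/b5213941.0.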
	I0729 17:34:04.027693   96181 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 17:34:04.031555   96181 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 17:34:04.031607   96181 kubeadm.go:392] StartCluster: {Name:addons-145541 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 C
lusterName:addons-145541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.242 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:34:04.031705   96181 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 17:34:04.031771   96181 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 17:34:04.067523   96181 cri.go:89] found id: ""
	I0729 17:34:04.067602   96181 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 17:34:04.077578   96181 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 17:34:04.087390   96181 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 17:34:04.096793   96181 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 17:34:04.096823   96181 kubeadm.go:157] found existing configuration files:
	
	I0729 17:34:04.096887   96181 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 17:34:04.105701   96181 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 17:34:04.105762   96181 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 17:34:04.114971   96181 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 17:34:04.124022   96181 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 17:34:04.124075   96181 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 17:34:04.133329   96181 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 17:34:04.142383   96181 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 17:34:04.142434   96181 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 17:34:04.151903   96181 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 17:34:04.161268   96181 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 17:34:04.161333   96181 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
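Editor's note: the four grep/rm pairs above are minikube's stale-config check: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint. A minimal shell sketch of the same logic (illustrative, not minikube's actual Go code in kubeadm.go):

	endpoint="https://control-plane.minikube.internal:8443"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # keep the file only if it already points at the expected endpoint
	  sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done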
	I0729 17:34:04.170791   96181 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 17:34:04.362502   96181 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 17:34:14.912704   96181 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 17:34:14.912776   96181 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 17:34:14.912883   96181 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 17:34:14.913013   96181 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 17:34:14.913133   96181 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 17:34:14.913271   96181 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 17:34:14.914905   96181 out.go:204]   - Generating certificates and keys ...
	I0729 17:34:14.914989   96181 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 17:34:14.915079   96181 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 17:34:14.915150   96181 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0729 17:34:14.915203   96181 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0729 17:34:14.915261   96181 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0729 17:34:14.915346   96181 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0729 17:34:14.915433   96181 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0729 17:34:14.915597   96181 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-145541 localhost] and IPs [192.168.39.242 127.0.0.1 ::1]
	I0729 17:34:14.915653   96181 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0729 17:34:14.915766   96181 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-145541 localhost] and IPs [192.168.39.242 127.0.0.1 ::1]
	I0729 17:34:14.915851   96181 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0729 17:34:14.915945   96181 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0729 17:34:14.916008   96181 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0729 17:34:14.916087   96181 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 17:34:14.916174   96181 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 17:34:14.916259   96181 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 17:34:14.916316   96181 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 17:34:14.916369   96181 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 17:34:14.916414   96181 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 17:34:14.916488   96181 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 17:34:14.916562   96181 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 17:34:14.918225   96181 out.go:204]   - Booting up control plane ...
	I0729 17:34:14.918330   96181 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 17:34:14.918398   96181 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 17:34:14.918465   96181 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 17:34:14.918600   96181 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 17:34:14.918673   96181 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 17:34:14.918709   96181 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 17:34:14.918820   96181 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 17:34:14.918889   96181 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 17:34:14.918938   96181 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 500.993498ms
	I0729 17:34:14.919006   96181 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 17:34:14.919079   96181 kubeadm.go:310] [api-check] The API server is healthy after 5.001415465s
	I0729 17:34:14.919171   96181 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 17:34:14.919302   96181 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 17:34:14.919392   96181 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 17:34:14.919549   96181 kubeadm.go:310] [mark-control-plane] Marking the node addons-145541 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 17:34:14.919602   96181 kubeadm.go:310] [bootstrap-token] Using token: a4jki6.7rj17ttaoqkipt8u
	I0729 17:34:14.920937   96181 out.go:204]   - Configuring RBAC rules ...
	I0729 17:34:14.921055   96181 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 17:34:14.921135   96181 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 17:34:14.921249   96181 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 17:34:14.921396   96181 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 17:34:14.921509   96181 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 17:34:14.921618   96181 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 17:34:14.921757   96181 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 17:34:14.921795   96181 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 17:34:14.921841   96181 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 17:34:14.921850   96181 kubeadm.go:310] 
	I0729 17:34:14.921910   96181 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 17:34:14.921918   96181 kubeadm.go:310] 
	I0729 17:34:14.921998   96181 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 17:34:14.922007   96181 kubeadm.go:310] 
	I0729 17:34:14.922039   96181 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 17:34:14.922093   96181 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 17:34:14.922141   96181 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 17:34:14.922147   96181 kubeadm.go:310] 
	I0729 17:34:14.922195   96181 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 17:34:14.922206   96181 kubeadm.go:310] 
	I0729 17:34:14.922252   96181 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 17:34:14.922259   96181 kubeadm.go:310] 
	I0729 17:34:14.922320   96181 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 17:34:14.922423   96181 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 17:34:14.922504   96181 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 17:34:14.922516   96181 kubeadm.go:310] 
	I0729 17:34:14.922588   96181 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 17:34:14.922651   96181 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 17:34:14.922657   96181 kubeadm.go:310] 
	I0729 17:34:14.922744   96181 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token a4jki6.7rj17ttaoqkipt8u \
	I0729 17:34:14.922843   96181 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:13f0bc092dc09b7291dec830fc09c94db5ac1707dd26df6091df80009af3af7f \
	I0729 17:34:14.922864   96181 kubeadm.go:310] 	--control-plane 
	I0729 17:34:14.922868   96181 kubeadm.go:310] 
	I0729 17:34:14.922935   96181 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 17:34:14.922941   96181 kubeadm.go:310] 
	I0729 17:34:14.923010   96181 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token a4jki6.7rj17ttaoqkipt8u \
	I0729 17:34:14.923111   96181 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:13f0bc092dc09b7291dec830fc09c94db5ac1707dd26df6091df80009af3af7f 
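Editor's note: the bootstrap token printed in the join commands above (a4jki6.…) expires after 24 hours by default. On a longer-lived cluster an equivalent join command can be regenerated on the control plane with:

	# Prints a fresh "kubeadm join <endpoint> --token ... --discovery-token-ca-cert-hash ..." line
	kubeadm token create --print-join-command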
	I0729 17:34:14.923123   96181 cni.go:84] Creating CNI manager for ""
	I0729 17:34:14.923130   96181 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 17:34:14.924657   96181 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 17:34:14.925913   96181 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 17:34:14.936704   96181 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
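Editor's note: the log records only the size of the generated conflist (496 bytes), not its contents. For orientation, a representative bridge + host-local CNI configuration of the kind this step installs looks roughly like the following; the field values are illustrative assumptions, not the exact file minikube wrote:

	# Write an example bridge conflist to /tmp for inspection (illustrative only).
	cat <<'EOF' > /tmp/example-1-k8s.conflist
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF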
	I0729 17:34:14.954347   96181 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 17:34:14.954447   96181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:34:14.954507   96181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-145541 minikube.k8s.io/updated_at=2024_07_29T17_34_14_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=de53afae5e8f4269438099f4dad14d93a8a17e35 minikube.k8s.io/name=addons-145541 minikube.k8s.io/primary=true
	I0729 17:34:14.971084   96181 ops.go:34] apiserver oom_adj: -16
	I0729 17:34:15.062005   96181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:34:15.562520   96181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:34:16.062052   96181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:34:16.562207   96181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:34:17.062276   96181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:34:17.562800   96181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:34:18.062785   96181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:34:18.562169   96181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:34:19.062060   96181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:34:19.562165   96181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:34:20.062918   96181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:34:20.562316   96181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:34:21.062340   96181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:34:21.562521   96181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:34:22.062718   96181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:34:22.562933   96181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:34:23.062033   96181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:34:23.562861   96181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:34:24.062196   96181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:34:24.562029   96181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:34:25.063040   96181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:34:25.562363   96181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:34:26.062342   96181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:34:26.562427   96181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:34:27.062083   96181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:34:27.562859   96181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:34:27.662394   96181 kubeadm.go:1113] duration metric: took 12.708011497s to wait for elevateKubeSystemPrivileges
	I0729 17:34:27.662441   96181 kubeadm.go:394] duration metric: took 23.630838114s to StartCluster
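Editor's note: the burst of identical `kubectl get sa default` runs above is minikube waiting (roughly every 500ms, per the timestamps) for the default service account to exist after the minikube-rbac clusterrolebinding is created. The same wait can be expressed as a simple shell loop (sketch, not minikube's code):

	KUBECTL=/var/lib/minikube/binaries/v1.30.3/kubectl
	until sudo "$KUBECTL" get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5   # retry until the default service account is available
	done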
	I0729 17:34:27.662464   96181 settings.go:142] acquiring lock: {Name:mk5599d3fc2f664ed5eea99f33b4436f64ab8c83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:34:27.662586   96181 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19339-88081/kubeconfig
	I0729 17:34:27.663103   96181 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/kubeconfig: {Name:mk6702c0db404329102489655bdd2ff03ad6e919 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:34:27.663306   96181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0729 17:34:27.663350   96181 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.242 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 17:34:27.663405   96181 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
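Editor's note: the toEnable map above is the addon set requested by the test harness. The same set can be inspected or adjusted interactively against this profile with the standard minikube addons commands, for example:

	minikube -p addons-145541 addons list                     # show enabled/disabled addons for the profile
	minikube -p addons-145541 addons enable metrics-server    # enable a single addon by name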
	I0729 17:34:27.663492   96181 addons.go:69] Setting yakd=true in profile "addons-145541"
	I0729 17:34:27.663503   96181 addons.go:69] Setting cloud-spanner=true in profile "addons-145541"
	I0729 17:34:27.663533   96181 addons.go:234] Setting addon yakd=true in "addons-145541"
	I0729 17:34:27.663536   96181 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-145541"
	I0729 17:34:27.663549   96181 addons.go:234] Setting addon cloud-spanner=true in "addons-145541"
	I0729 17:34:27.663542   96181 addons.go:69] Setting metrics-server=true in profile "addons-145541"
	I0729 17:34:27.663566   96181 config.go:182] Loaded profile config "addons-145541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:34:27.663587   96181 host.go:66] Checking if "addons-145541" exists ...
	I0729 17:34:27.663595   96181 addons.go:69] Setting registry=true in profile "addons-145541"
	I0729 17:34:27.663607   96181 addons.go:234] Setting addon metrics-server=true in "addons-145541"
	I0729 17:34:27.663577   96181 host.go:66] Checking if "addons-145541" exists ...
	I0729 17:34:27.663625   96181 addons.go:69] Setting volcano=true in profile "addons-145541"
	I0729 17:34:27.663634   96181 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-145541"
	I0729 17:34:27.663654   96181 addons.go:234] Setting addon volcano=true in "addons-145541"
	I0729 17:34:27.663655   96181 addons.go:69] Setting gcp-auth=true in profile "addons-145541"
	I0729 17:34:27.663674   96181 mustload.go:65] Loading cluster: addons-145541
	I0729 17:34:27.663683   96181 addons.go:69] Setting volumesnapshots=true in profile "addons-145541"
	I0729 17:34:27.663685   96181 addons.go:69] Setting storage-provisioner=true in profile "addons-145541"
	I0729 17:34:27.663705   96181 addons.go:234] Setting addon storage-provisioner=true in "addons-145541"
	I0729 17:34:27.663708   96181 addons.go:234] Setting addon volumesnapshots=true in "addons-145541"
	I0729 17:34:27.663490   96181 addons.go:69] Setting default-storageclass=true in profile "addons-145541"
	I0729 17:34:27.663727   96181 host.go:66] Checking if "addons-145541" exists ...
	I0729 17:34:27.663734   96181 host.go:66] Checking if "addons-145541" exists ...
	I0729 17:34:27.663742   96181 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-145541"
	I0729 17:34:27.663861   96181 config.go:182] Loaded profile config "addons-145541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:34:27.664024   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.664064   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.663674   96181 host.go:66] Checking if "addons-145541" exists ...
	I0729 17:34:27.664152   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.664195   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.664194   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.663649   96181 host.go:66] Checking if "addons-145541" exists ...
	I0729 17:34:27.664222   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.664259   96181 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-145541"
	I0729 17:34:27.663587   96181 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-145541"
	I0729 17:34:27.664295   96181 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-145541"
	I0729 17:34:27.664327   96181 host.go:66] Checking if "addons-145541" exists ...
	I0729 17:34:27.664427   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.664450   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.663616   96181 addons.go:234] Setting addon registry=true in "addons-145541"
	I0729 17:34:27.663674   96181 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-145541"
	I0729 17:34:27.664519   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.664522   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.664535   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.664555   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.664613   96181 host.go:66] Checking if "addons-145541" exists ...
	I0729 17:34:27.664625   96181 addons.go:69] Setting helm-tiller=true in profile "addons-145541"
	I0729 17:34:27.664646   96181 addons.go:69] Setting inspektor-gadget=true in profile "addons-145541"
	I0729 17:34:27.664656   96181 addons.go:69] Setting ingress=true in profile "addons-145541"
	I0729 17:34:27.664672   96181 addons.go:234] Setting addon ingress=true in "addons-145541"
	I0729 17:34:27.664648   96181 addons.go:234] Setting addon helm-tiller=true in "addons-145541"
	I0729 17:34:27.664675   96181 addons.go:234] Setting addon inspektor-gadget=true in "addons-145541"
	I0729 17:34:27.664705   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.664738   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.664619   96181 addons.go:69] Setting ingress-dns=true in profile "addons-145541"
	I0729 17:34:27.664905   96181 addons.go:234] Setting addon ingress-dns=true in "addons-145541"
	I0729 17:34:27.664920   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.664931   96181 host.go:66] Checking if "addons-145541" exists ...
	I0729 17:34:27.664940   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.664965   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.664987   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.664998   96181 host.go:66] Checking if "addons-145541" exists ...
	I0729 17:34:27.665047   96181 host.go:66] Checking if "addons-145541" exists ...
	I0729 17:34:27.665070   96181 out.go:177] * Verifying Kubernetes components...
	I0729 17:34:27.665109   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.665130   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.665329   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.665339   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.665346   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.665357   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.665361   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.665362   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.665376   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.665400   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.665493   96181 host.go:66] Checking if "addons-145541" exists ...
	I0729 17:34:27.665646   96181 host.go:66] Checking if "addons-145541" exists ...
	I0729 17:34:27.680992   96181 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 17:34:27.684128   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40163
	I0729 17:34:27.684142   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44299
	I0729 17:34:27.684277   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43559
	I0729 17:34:27.684700   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.684963   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.685344   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.685365   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.685773   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.685936   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42421
	I0729 17:34:27.686474   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.686506   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.693026   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.693093   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41415
	I0729 17:34:27.693126   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.693141   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.693584   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.693872   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.693930   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.694320   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.694394   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.695023   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.695074   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.695478   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.696664   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36295
	I0729 17:34:27.697379   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42201
	I0729 17:34:27.697848   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.698394   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.698414   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.698748   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.699308   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.699350   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.701936   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40465
	I0729 17:34:27.702427   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.702969   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.702988   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.703457   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.704041   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.704079   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.705305   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.705348   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.705547   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.705580   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.705808   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.705848   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.705305   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.711607   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.711647   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.711500   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.711735   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.712449   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.712470   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.711557   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.713024   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.713042   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.713501   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.714097   96181 main.go:141] libmachine: (addons-145541) Calling .GetState
	I0729 17:34:27.714158   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.715340   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.715363   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.718957   96181 addons.go:234] Setting addon default-storageclass=true in "addons-145541"
	I0729 17:34:27.719004   96181 host.go:66] Checking if "addons-145541" exists ...
	I0729 17:34:27.719346   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.719367   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.735020   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37927
	I0729 17:34:27.739011   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36981
	I0729 17:34:27.739578   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.740182   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.740203   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.740638   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.740902   96181 main.go:141] libmachine: (addons-145541) Calling .GetState
	I0729 17:34:27.741508   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.742203   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.742231   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.742630   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.742904   96181 main.go:141] libmachine: (addons-145541) Calling .DriverName
	I0729 17:34:27.742997   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36209
	I0729 17:34:27.743060   96181 main.go:141] libmachine: (addons-145541) Calling .GetState
	I0729 17:34:27.743094   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:27.743107   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:27.743274   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:27.743287   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:34:27.743296   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:27.743303   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:27.743422   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.743907   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.743927   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.744279   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.744930   96181 main.go:141] libmachine: (addons-145541) Calling .DriverName
	I0729 17:34:27.745021   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.745059   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.745669   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43291
	I0729 17:34:27.746229   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.746624   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46197
	I0729 17:34:27.746877   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.746904   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.747237   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.747416   96181 main.go:141] libmachine: (addons-145541) Calling .GetState
	I0729 17:34:27.748284   96181 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 17:34:27.749091   96181 main.go:141] libmachine: (addons-145541) Calling .DriverName
	I0729 17:34:27.749161   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:27.749176   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	W0729 17:34:27.749270   96181 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0729 17:34:27.749931   96181 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 17:34:27.749952   96181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 17:34:27.749970   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHHostname
	I0729 17:34:27.750073   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.750189   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38427
	I0729 17:34:27.750484   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.750586   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45525
	I0729 17:34:27.750940   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.750955   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.751166   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.751293   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.751330   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.751341   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.751462   96181 out.go:177]   - Using image docker.io/registry:2.8.3
	I0729 17:34:27.751979   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.752035   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.752268   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42681
	I0729 17:34:27.752379   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.752549   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.752567   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.752747   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44771
	I0729 17:34:27.752972   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.753085   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.753682   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.753719   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.753719   96181 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0729 17:34:27.754076   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.754092   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.754584   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.754654   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:34:27.754960   96181 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0729 17:34:27.754979   96181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0729 17:34:27.755002   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHHostname
	I0729 17:34:27.755078   96181 main.go:141] libmachine: (addons-145541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541: {Iface:virbr1 ExpiryTime:2024-07-29 18:33:45 +0000 UTC Type:0 Mac:52:54:00:25:f4:2d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:addons-145541 Clientid:01:52:54:00:25:f4:2d}
	I0729 17:34:27.755093   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:34:27.755118   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38123
	I0729 17:34:27.755191   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.755212   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.755443   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.755544   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHPort
	I0729 17:34:27.755741   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHKeyPath
	I0729 17:34:27.756068   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.756371   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.756387   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.756598   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.756616   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.756935   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39577
	I0729 17:34:27.757057   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.757254   96181 main.go:141] libmachine: (addons-145541) Calling .GetState
	I0729 17:34:27.757340   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.757354   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.757536   96181 main.go:141] libmachine: (addons-145541) Calling .GetState
	I0729 17:34:27.757800   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.757818   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.758563   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.758768   96181 main.go:141] libmachine: (addons-145541) Calling .GetState
	I0729 17:34:27.759004   96181 main.go:141] libmachine: (addons-145541) Calling .DriverName
	I0729 17:34:27.759335   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:34:27.759794   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.759835   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.760107   96181 main.go:141] libmachine: (addons-145541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541: {Iface:virbr1 ExpiryTime:2024-07-29 18:33:45 +0000 UTC Type:0 Mac:52:54:00:25:f4:2d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:addons-145541 Clientid:01:52:54:00:25:f4:2d}
	I0729 17:34:27.760134   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:34:27.760595   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHPort
	I0729 17:34:27.760879   96181 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0729 17:34:27.760996   96181 main.go:141] libmachine: (addons-145541) Calling .DriverName
	I0729 17:34:27.761463   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHUsername
	I0729 17:34:27.761540   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHKeyPath
	I0729 17:34:27.761856   96181 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/addons-145541/id_rsa Username:docker}
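Editor's note: each sshutil client above connects to the VM as user docker with the per-profile key shown in the log. The same connection can be made by hand for debugging (roughly what `minikube ssh -p addons-145541` provides):

	# Manual equivalent of the ssh clients created above; key path and IP are taken from the log.
	ssh -i /home/jenkins/minikube-integration/19339-88081/.minikube/machines/addons-145541/id_rsa \
	    -o StrictHostKeyChecking=no docker@192.168.39.242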
	I0729 17:34:27.762100   96181 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0729 17:34:27.762156   96181 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 17:34:27.762164   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHUsername
	I0729 17:34:27.762167   96181 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 17:34:27.762186   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHHostname
	I0729 17:34:27.762417   96181 host.go:66] Checking if "addons-145541" exists ...
	I0729 17:34:27.762811   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.762866   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.762900   96181 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/addons-145541/id_rsa Username:docker}
	I0729 17:34:27.764103   96181 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0729 17:34:27.764120   96181 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0729 17:34:27.764146   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHHostname
	I0729 17:34:27.766607   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:34:27.767066   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34117
	I0729 17:34:27.767566   96181 main.go:141] libmachine: (addons-145541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541: {Iface:virbr1 ExpiryTime:2024-07-29 18:33:45 +0000 UTC Type:0 Mac:52:54:00:25:f4:2d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:addons-145541 Clientid:01:52:54:00:25:f4:2d}
	I0729 17:34:27.767615   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:34:27.767877   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHPort
	I0729 17:34:27.768118   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHKeyPath
	I0729 17:34:27.768355   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHUsername
	I0729 17:34:27.768597   96181 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/addons-145541/id_rsa Username:docker}
	I0729 17:34:27.769535   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:34:27.769958   96181 main.go:141] libmachine: (addons-145541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541: {Iface:virbr1 ExpiryTime:2024-07-29 18:33:45 +0000 UTC Type:0 Mac:52:54:00:25:f4:2d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:addons-145541 Clientid:01:52:54:00:25:f4:2d}
	I0729 17:34:27.769977   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:34:27.770231   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHPort
	I0729 17:34:27.770434   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHKeyPath
	I0729 17:34:27.770615   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHUsername
	I0729 17:34:27.770878   96181 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/addons-145541/id_rsa Username:docker}
	I0729 17:34:27.771504   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.772185   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.772202   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.772690   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.773042   96181 main.go:141] libmachine: (addons-145541) Calling .GetState
	I0729 17:34:27.774660   96181 main.go:141] libmachine: (addons-145541) Calling .DriverName
	I0729 17:34:27.776256   96181 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0729 17:34:27.777648   96181 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0729 17:34:27.777667   96181 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0729 17:34:27.777687   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHHostname
	I0729 17:34:27.779616   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38759
	I0729 17:34:27.780147   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.780716   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.780734   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.780794   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:34:27.781057   96181 main.go:141] libmachine: (addons-145541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541: {Iface:virbr1 ExpiryTime:2024-07-29 18:33:45 +0000 UTC Type:0 Mac:52:54:00:25:f4:2d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:addons-145541 Clientid:01:52:54:00:25:f4:2d}
	I0729 17:34:27.781075   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:34:27.781119   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.781303   96181 main.go:141] libmachine: (addons-145541) Calling .GetState
	I0729 17:34:27.781394   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHPort
	I0729 17:34:27.781586   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHKeyPath
	I0729 17:34:27.781765   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHUsername
	I0729 17:34:27.781939   96181 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/addons-145541/id_rsa Username:docker}
	I0729 17:34:27.784009   96181 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-145541"
	I0729 17:34:27.784067   96181 host.go:66] Checking if "addons-145541" exists ...
	I0729 17:34:27.784443   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.784552   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.784773   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41181
	I0729 17:34:27.784980   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38371
	I0729 17:34:27.785140   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35489
	I0729 17:34:27.785651   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.785775   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.785852   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.785925   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36375
	I0729 17:34:27.786220   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.786232   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.786324   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.786331   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.786413   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.786431   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.786676   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.786770   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.786946   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.787112   96181 main.go:141] libmachine: (addons-145541) Calling .GetState
	I0729 17:34:27.787153   96181 main.go:141] libmachine: (addons-145541) Calling .DriverName
	I0729 17:34:27.787990   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.788622   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.788666   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.789094   96181 main.go:141] libmachine: (addons-145541) Calling .DriverName
	I0729 17:34:27.789910   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.789935   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.790108   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33573
	I0729 17:34:27.790308   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.790521   96181 main.go:141] libmachine: (addons-145541) Calling .GetState
	I0729 17:34:27.790602   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.791034   96181 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0729 17:34:27.791131   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.791153   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.791673   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.791846   96181 main.go:141] libmachine: (addons-145541) Calling .GetState
	I0729 17:34:27.792244   96181 main.go:141] libmachine: (addons-145541) Calling .DriverName
	I0729 17:34:27.793482   96181 main.go:141] libmachine: (addons-145541) Calling .DriverName
	I0729 17:34:27.793826   96181 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.1
	I0729 17:34:27.793826   96181 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0729 17:34:27.794872   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45385
	I0729 17:34:27.795007   96181 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0729 17:34:27.795323   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.796000   96181 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0729 17:34:27.796021   96181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0729 17:34:27.796041   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHHostname
	I0729 17:34:27.796670   96181 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0729 17:34:27.796689   96181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0729 17:34:27.796707   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHHostname
	I0729 17:34:27.796207   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.796786   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.797214   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.797828   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.797870   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.798633   96181 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0729 17:34:27.799855   96181 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0729 17:34:27.800617   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:34:27.800827   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:34:27.801139   96181 main.go:141] libmachine: (addons-145541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541: {Iface:virbr1 ExpiryTime:2024-07-29 18:33:45 +0000 UTC Type:0 Mac:52:54:00:25:f4:2d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:addons-145541 Clientid:01:52:54:00:25:f4:2d}
	I0729 17:34:27.801167   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:34:27.801200   96181 main.go:141] libmachine: (addons-145541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541: {Iface:virbr1 ExpiryTime:2024-07-29 18:33:45 +0000 UTC Type:0 Mac:52:54:00:25:f4:2d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:addons-145541 Clientid:01:52:54:00:25:f4:2d}
	I0729 17:34:27.801211   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:34:27.801401   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHPort
	I0729 17:34:27.801401   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHPort
	I0729 17:34:27.801572   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHKeyPath
	I0729 17:34:27.801712   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHUsername
	I0729 17:34:27.801755   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHKeyPath
	I0729 17:34:27.801855   96181 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/addons-145541/id_rsa Username:docker}
	I0729 17:34:27.801920   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHUsername
	I0729 17:34:27.802084   96181 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/addons-145541/id_rsa Username:docker}
	I0729 17:34:27.803851   96181 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0729 17:34:27.805107   96181 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0729 17:34:27.806312   96181 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0729 17:34:27.807604   96181 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0729 17:34:27.808681   96181 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0729 17:34:27.808701   96181 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0729 17:34:27.808740   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHHostname
	I0729 17:34:27.809960   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44649
	I0729 17:34:27.809966   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42363
	I0729 17:34:27.810408   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.811036   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.811051   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.811454   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.811530   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.811614   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39503
	I0729 17:34:27.812579   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:27.812619   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:27.812930   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:34:27.812958   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.812980   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.813316   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.813440   96181 main.go:141] libmachine: (addons-145541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541: {Iface:virbr1 ExpiryTime:2024-07-29 18:33:45 +0000 UTC Type:0 Mac:52:54:00:25:f4:2d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:addons-145541 Clientid:01:52:54:00:25:f4:2d}
	I0729 17:34:27.813460   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:34:27.813510   96181 main.go:141] libmachine: (addons-145541) Calling .GetState
	I0729 17:34:27.813569   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.814638   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHPort
	I0729 17:34:27.814797   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.814812   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.814960   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHKeyPath
	I0729 17:34:27.815090   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHUsername
	I0729 17:34:27.815191   96181 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/addons-145541/id_rsa Username:docker}
	I0729 17:34:27.815487   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34617
	I0729 17:34:27.815595   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.815878   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.815969   96181 main.go:141] libmachine: (addons-145541) Calling .GetState
	I0729 17:34:27.816408   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.816426   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.816478   96181 main.go:141] libmachine: (addons-145541) Calling .DriverName
	I0729 17:34:27.817201   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.817236   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46533
	I0729 17:34:27.817527   96181 main.go:141] libmachine: (addons-145541) Calling .DriverName
	I0729 17:34:27.817609   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.818164   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.818190   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.818253   96181 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0729 17:34:27.818578   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.818659   96181 main.go:141] libmachine: (addons-145541) Calling .GetState
	I0729 17:34:27.818799   96181 main.go:141] libmachine: (addons-145541) Calling .GetState
	I0729 17:34:27.818957   96181 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0729 17:34:27.819834   96181 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0729 17:34:27.819854   96181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0729 17:34:27.819872   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHHostname
	I0729 17:34:27.820472   96181 main.go:141] libmachine: (addons-145541) Calling .DriverName
	I0729 17:34:27.820637   96181 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0729 17:34:27.820653   96181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0729 17:34:27.820670   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHHostname
	I0729 17:34:27.820877   96181 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 17:34:27.820890   96181 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 17:34:27.820907   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHHostname
	I0729 17:34:27.821070   96181 main.go:141] libmachine: (addons-145541) Calling .DriverName
	I0729 17:34:27.821149   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39939
	I0729 17:34:27.821629   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.822024   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.822047   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.822432   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.822632   96181 main.go:141] libmachine: (addons-145541) Calling .GetState
	I0729 17:34:27.824886   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:34:27.825056   96181 main.go:141] libmachine: (addons-145541) Calling .DriverName
	I0729 17:34:27.825345   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:34:27.825380   96181 main.go:141] libmachine: (addons-145541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541: {Iface:virbr1 ExpiryTime:2024-07-29 18:33:45 +0000 UTC Type:0 Mac:52:54:00:25:f4:2d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:addons-145541 Clientid:01:52:54:00:25:f4:2d}
	I0729 17:34:27.825395   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:34:27.825577   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHPort
	I0729 17:34:27.825742   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHKeyPath
	I0729 17:34:27.825806   96181 main.go:141] libmachine: (addons-145541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541: {Iface:virbr1 ExpiryTime:2024-07-29 18:33:45 +0000 UTC Type:0 Mac:52:54:00:25:f4:2d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:addons-145541 Clientid:01:52:54:00:25:f4:2d}
	I0729 17:34:27.825823   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:34:27.825853   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHUsername
	I0729 17:34:27.825962   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHPort
	I0729 17:34:27.826092   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHKeyPath
	I0729 17:34:27.826204   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHUsername
	I0729 17:34:27.826307   96181 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/addons-145541/id_rsa Username:docker}
	I0729 17:34:27.826566   96181 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/addons-145541/id_rsa Username:docker}
	I0729 17:34:27.826686   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:34:27.826719   96181 main.go:141] libmachine: (addons-145541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541: {Iface:virbr1 ExpiryTime:2024-07-29 18:33:45 +0000 UTC Type:0 Mac:52:54:00:25:f4:2d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:addons-145541 Clientid:01:52:54:00:25:f4:2d}
	I0729 17:34:27.826741   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:34:27.826880   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHPort
	I0729 17:34:27.827080   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHKeyPath
	I0729 17:34:27.827237   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHUsername
	I0729 17:34:27.827353   96181 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/addons-145541/id_rsa Username:docker}
	W0729 17:34:27.829579   96181 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:42572->192.168.39.242:22: read: connection reset by peer
	I0729 17:34:27.830266   96181 retry.go:31] will retry after 204.825198ms: ssh: handshake failed: read tcp 192.168.39.1:42572->192.168.39.242:22: read: connection reset by peer
	I0729 17:34:27.830304   96181 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0729 17:34:27.831799   96181 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0729 17:34:27.832765   96181 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0729 17:34:27.834131   96181 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0729 17:34:27.834138   96181 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0729 17:34:27.834194   96181 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0729 17:34:27.834226   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHHostname
	I0729 17:34:27.836119   96181 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0729 17:34:27.836141   96181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0729 17:34:27.836165   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHHostname
	I0729 17:34:27.837582   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:34:27.838171   96181 main.go:141] libmachine: (addons-145541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541: {Iface:virbr1 ExpiryTime:2024-07-29 18:33:45 +0000 UTC Type:0 Mac:52:54:00:25:f4:2d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:addons-145541 Clientid:01:52:54:00:25:f4:2d}
	I0729 17:34:27.838200   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:34:27.838316   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHPort
	I0729 17:34:27.838504   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHKeyPath
	I0729 17:34:27.838653   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHUsername
	I0729 17:34:27.838790   96181 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/addons-145541/id_rsa Username:docker}
	I0729 17:34:27.839165   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:34:27.839530   96181 main.go:141] libmachine: (addons-145541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541: {Iface:virbr1 ExpiryTime:2024-07-29 18:33:45 +0000 UTC Type:0 Mac:52:54:00:25:f4:2d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:addons-145541 Clientid:01:52:54:00:25:f4:2d}
	I0729 17:34:27.839557   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:34:27.839695   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHPort
	I0729 17:34:27.839877   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHKeyPath
	I0729 17:34:27.840029   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHUsername
	I0729 17:34:27.840167   96181 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/addons-145541/id_rsa Username:docker}
	I0729 17:34:27.846963   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43563
	I0729 17:34:27.847469   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:27.848038   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:27.848057   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:27.848425   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:27.848609   96181 main.go:141] libmachine: (addons-145541) Calling .GetState
	I0729 17:34:27.850321   96181 main.go:141] libmachine: (addons-145541) Calling .DriverName
	I0729 17:34:27.852299   96181 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0729 17:34:27.853720   96181 out.go:177]   - Using image docker.io/busybox:stable
	I0729 17:34:27.855159   96181 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0729 17:34:27.855174   96181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0729 17:34:27.855187   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHHostname
	I0729 17:34:27.857747   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:34:27.858071   96181 main.go:141] libmachine: (addons-145541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541: {Iface:virbr1 ExpiryTime:2024-07-29 18:33:45 +0000 UTC Type:0 Mac:52:54:00:25:f4:2d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:addons-145541 Clientid:01:52:54:00:25:f4:2d}
	I0729 17:34:27.858105   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:34:27.858274   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHPort
	I0729 17:34:27.858432   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHKeyPath
	I0729 17:34:27.858566   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHUsername
	I0729 17:34:27.858692   96181 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/addons-145541/id_rsa Username:docker}
	W0729 17:34:28.037417   96181 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:42612->192.168.39.242:22: read: connection reset by peer
	I0729 17:34:28.037452   96181 retry.go:31] will retry after 403.151739ms: ssh: handshake failed: read tcp 192.168.39.1:42612->192.168.39.242:22: read: connection reset by peer
	I0729 17:34:28.141384   96181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0729 17:34:28.141395   96181 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 17:34:28.183015   96181 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0729 17:34:28.202367   96181 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0729 17:34:28.202393   96181 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0729 17:34:28.203450   96181 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0729 17:34:28.203468   96181 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0729 17:34:28.277314   96181 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 17:34:28.289222   96181 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0729 17:34:28.299650   96181 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0729 17:34:28.299672   96181 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0729 17:34:28.310544   96181 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 17:34:28.315309   96181 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0729 17:34:28.341325   96181 node_ready.go:35] waiting up to 6m0s for node "addons-145541" to be "Ready" ...
	I0729 17:34:28.344259   96181 node_ready.go:49] node "addons-145541" has status "Ready":"True"
	I0729 17:34:28.344279   96181 node_ready.go:38] duration metric: took 2.929261ms for node "addons-145541" to be "Ready" ...
	I0729 17:34:28.344286   96181 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 17:34:28.350504   96181 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-dfrfm" in "kube-system" namespace to be "Ready" ...
	I0729 17:34:28.381747   96181 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0729 17:34:28.381770   96181 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0729 17:34:28.414157   96181 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0729 17:34:28.414180   96181 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0729 17:34:28.427125   96181 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0729 17:34:28.427148   96181 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0729 17:34:28.431969   96181 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0729 17:34:28.431991   96181 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0729 17:34:28.433988   96181 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0729 17:34:28.435218   96181 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 17:34:28.435232   96181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0729 17:34:28.440133   96181 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0729 17:34:28.440156   96181 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0729 17:34:28.511422   96181 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0729 17:34:28.511450   96181 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0729 17:34:28.586992   96181 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0729 17:34:28.587023   96181 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0729 17:34:28.677952   96181 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0729 17:34:28.677974   96181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0729 17:34:28.688718   96181 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0729 17:34:28.688740   96181 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0729 17:34:28.735118   96181 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0729 17:34:28.735144   96181 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0729 17:34:28.738218   96181 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 17:34:28.738238   96181 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 17:34:28.740523   96181 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0729 17:34:28.740546   96181 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0729 17:34:28.757326   96181 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0729 17:34:28.757355   96181 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0729 17:34:28.806163   96181 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0729 17:34:28.806190   96181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0729 17:34:28.872016   96181 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0729 17:34:28.898521   96181 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0729 17:34:28.898545   96181 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0729 17:34:28.910448   96181 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 17:34:28.910468   96181 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 17:34:28.948434   96181 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0729 17:34:28.948456   96181 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0729 17:34:28.967528   96181 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0729 17:34:28.967552   96181 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0729 17:34:28.968754   96181 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0729 17:34:29.017422   96181 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0729 17:34:29.027884   96181 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0729 17:34:29.138010   96181 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 17:34:29.146515   96181 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0729 17:34:29.146547   96181 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0729 17:34:29.149661   96181 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0729 17:34:29.149679   96181 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0729 17:34:29.210077   96181 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0729 17:34:29.210102   96181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0729 17:34:29.283889   96181 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0729 17:34:29.283914   96181 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0729 17:34:29.415870   96181 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0729 17:34:29.415898   96181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0729 17:34:29.467628   96181 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0729 17:34:29.568459   96181 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0729 17:34:29.568495   96181 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0729 17:34:29.760001   96181 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0729 17:34:29.760027   96181 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0729 17:34:29.846135   96181 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0729 17:34:29.846166   96181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0729 17:34:29.970844   96181 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0729 17:34:29.970875   96181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0729 17:34:30.127847   96181 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0729 17:34:30.215724   96181 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0729 17:34:30.215749   96181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0729 17:34:30.356558   96181 pod_ready.go:102] pod "coredns-7db6d8ff4d-dfrfm" in "kube-system" namespace has status "Ready":"False"
	I0729 17:34:30.497860   96181 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0729 17:34:30.497891   96181 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0729 17:34:30.664938   96181 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.523517069s)
	I0729 17:34:30.664973   96181 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0729 17:34:30.735058   96181 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0729 17:34:31.183627   96181 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-145541" context rescaled to 1 replicas
	I0729 17:34:32.499236   96181 pod_ready.go:102] pod "coredns-7db6d8ff4d-dfrfm" in "kube-system" namespace has status "Ready":"False"
	I0729 17:34:33.888682   96181 pod_ready.go:92] pod "coredns-7db6d8ff4d-dfrfm" in "kube-system" namespace has status "Ready":"True"
	I0729 17:34:33.888706   96181 pod_ready.go:81] duration metric: took 5.538178544s for pod "coredns-7db6d8ff4d-dfrfm" in "kube-system" namespace to be "Ready" ...
	I0729 17:34:33.888719   96181 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-sn87l" in "kube-system" namespace to be "Ready" ...
	I0729 17:34:34.794666   96181 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0729 17:34:34.794708   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHHostname
	I0729 17:34:34.797613   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:34:34.798086   96181 main.go:141] libmachine: (addons-145541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541: {Iface:virbr1 ExpiryTime:2024-07-29 18:33:45 +0000 UTC Type:0 Mac:52:54:00:25:f4:2d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:addons-145541 Clientid:01:52:54:00:25:f4:2d}
	I0729 17:34:34.798116   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:34:34.798299   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHPort
	I0729 17:34:34.798536   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHKeyPath
	I0729 17:34:34.798722   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHUsername
	I0729 17:34:34.798859   96181 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/addons-145541/id_rsa Username:docker}
	I0729 17:34:35.208998   96181 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0729 17:34:35.366046   96181 addons.go:234] Setting addon gcp-auth=true in "addons-145541"
	I0729 17:34:35.366107   96181 host.go:66] Checking if "addons-145541" exists ...
	I0729 17:34:35.366406   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:35.366435   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:35.381671   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43919
	I0729 17:34:35.382145   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:35.382654   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:35.382677   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:35.383014   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:35.383471   96181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:34:35.383502   96181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:34:35.398272   96181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42119
	I0729 17:34:35.398719   96181 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:34:35.399238   96181 main.go:141] libmachine: Using API Version  1
	I0729 17:34:35.399268   96181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:34:35.399573   96181 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:34:35.399757   96181 main.go:141] libmachine: (addons-145541) Calling .GetState
	I0729 17:34:35.401244   96181 main.go:141] libmachine: (addons-145541) Calling .DriverName
	I0729 17:34:35.401473   96181 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0729 17:34:35.401494   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHHostname
	I0729 17:34:35.404600   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:34:35.405082   96181 main.go:141] libmachine: (addons-145541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:f4:2d", ip: ""} in network mk-addons-145541: {Iface:virbr1 ExpiryTime:2024-07-29 18:33:45 +0000 UTC Type:0 Mac:52:54:00:25:f4:2d Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:addons-145541 Clientid:01:52:54:00:25:f4:2d}
	I0729 17:34:35.405114   96181 main.go:141] libmachine: (addons-145541) DBG | domain addons-145541 has defined IP address 192.168.39.242 and MAC address 52:54:00:25:f4:2d in network mk-addons-145541
	I0729 17:34:35.405289   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHPort
	I0729 17:34:35.405446   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHKeyPath
	I0729 17:34:35.405597   96181 main.go:141] libmachine: (addons-145541) Calling .GetSSHUsername
	I0729 17:34:35.405702   96181 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/addons-145541/id_rsa Username:docker}
	I0729 17:34:35.903712   96181 pod_ready.go:102] pod "coredns-7db6d8ff4d-sn87l" in "kube-system" namespace has status "Ready":"False"
	I0729 17:34:36.628397   96181 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.351042747s)
	I0729 17:34:36.628454   96181 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.339203241s)
	I0729 17:34:36.628495   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:36.628513   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:36.628541   96181 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.313213577s)
	I0729 17:34:36.628499   96181 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.317924577s)
	I0729 17:34:36.628579   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:36.628592   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:36.628595   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:36.628603   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:36.628621   96181 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.194611778s)
	I0729 17:34:36.628463   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:36.628649   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:36.628651   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:36.628711   96181 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.756664091s)
	I0729 17:34:36.628749   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:36.628764   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:36.628792   96181 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.660013942s)
	I0729 17:34:36.628812   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:36.628822   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:36.628870   96181 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.611408784s)
	I0729 17:34:36.628893   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:36.628902   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:36.628905   96181 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.600998026s)
	I0729 17:34:36.628921   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:36.628932   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:36.629003   96181 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.490964528s)
	I0729 17:34:36.629021   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:36.629030   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:36.629160   96181 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.161498941s)
	W0729 17:34:36.629191   96181 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0729 17:34:36.629229   96181 retry.go:31] will retry after 245.713684ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
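	Note on the failure and retry above: the first kubectl apply batches the VolumeSnapshotClass object together with the CRD that defines it, so the API server rejects the custom resource until the CRD is established, and the addon installer simply re-runs the whole apply after a short delay. Below is a minimal, illustrative Go sketch of that retry-with-backoff pattern; it is not minikube's actual retry.go, and the attempt count, initial delay, and simulated error are assumptions made for the example.

	package main

	import (
		"fmt"
		"time"
	)

	// retryAfter runs fn up to attempts times, sleeping delay between failures
	// and doubling the delay each time, in the spirit of the
	// "will retry after ..." log lines above.
	func retryAfter(attempts int, delay time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
			delay *= 2
		}
		return err
	}

	func main() {
		tries := 0
		// Simulate an apply that fails until the CRD is established (hypothetical).
		err := retryAfter(5, 250*time.Millisecond, func() error {
			tries++
			if tries < 3 {
				return fmt.Errorf("no matches for kind %q", "VolumeSnapshotClass")
			}
			return nil
		})
		fmt.Println("final result:", err)
	}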
	I0729 17:34:36.629318   96181 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.501441932s)
	I0729 17:34:36.629339   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:36.629349   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:36.631025   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:36.632743   96181 main.go:141] libmachine: (addons-145541) DBG | Closing plugin on server side
	I0729 17:34:36.632745   96181 main.go:141] libmachine: (addons-145541) DBG | Closing plugin on server side
	I0729 17:34:36.632748   96181 main.go:141] libmachine: (addons-145541) DBG | Closing plugin on server side
	I0729 17:34:36.632762   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:36.632777   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:36.632782   96181 main.go:141] libmachine: (addons-145541) DBG | Closing plugin on server side
	I0729 17:34:36.632781   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:36.632787   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:34:36.632795   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:34:36.632798   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:36.632800   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:36.632801   96181 main.go:141] libmachine: (addons-145541) DBG | Closing plugin on server side
	I0729 17:34:36.632810   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:34:36.632820   96181 main.go:141] libmachine: (addons-145541) DBG | Closing plugin on server side
	I0729 17:34:36.632822   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:36.632832   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:36.632787   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:34:36.632840   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:34:36.632833   96181 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.44977425s)
	I0729 17:34:36.632848   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:36.632872   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:36.632879   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:36.632884   96181 main.go:141] libmachine: (addons-145541) DBG | Closing plugin on server side
	I0729 17:34:36.632807   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:36.632901   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:36.632803   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:36.632885   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:36.632931   96181 main.go:141] libmachine: (addons-145541) DBG | Closing plugin on server side
	I0729 17:34:36.632935   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:34:36.632939   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:36.632944   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:36.632950   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:36.632844   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:36.632960   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:34:36.632961   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:36.632970   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:36.632972   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:34:36.632977   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:36.632980   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:36.632985   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:36.632987   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:36.632993   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:34:36.632841   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:36.632834   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:36.633012   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:34:36.633021   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:36.633028   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:36.633032   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:36.633037   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:36.633039   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:34:36.633044   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:34:36.633052   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:36.632961   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:36.633060   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:36.632951   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:36.633195   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:36.633205   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:34:36.633371   96181 main.go:141] libmachine: (addons-145541) DBG | Closing plugin on server side
	I0729 17:34:36.633405   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:36.633412   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:34:36.633437   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:36.633471   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:36.633525   96181 main.go:141] libmachine: (addons-145541) DBG | Closing plugin on server side
	I0729 17:34:36.633550   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:36.633556   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:34:36.633618   96181 main.go:141] libmachine: (addons-145541) DBG | Closing plugin on server side
	I0729 17:34:36.633661   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:36.633669   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:34:36.633678   96181 addons.go:475] Verifying addon metrics-server=true in "addons-145541"
	I0729 17:34:36.633004   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:36.633714   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:36.633771   96181 main.go:141] libmachine: (addons-145541) DBG | Closing plugin on server side
	I0729 17:34:36.633792   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:36.633798   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:34:36.633805   96181 addons.go:475] Verifying addon ingress=true in "addons-145541"
	I0729 17:34:36.633974   96181 main.go:141] libmachine: (addons-145541) DBG | Closing plugin on server side
	I0729 17:34:36.633999   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:36.634009   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:34:36.634020   96181 main.go:141] libmachine: (addons-145541) DBG | Closing plugin on server side
	I0729 17:34:36.634041   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:36.634047   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:34:36.634000   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:36.634056   96181 main.go:141] libmachine: (addons-145541) DBG | Closing plugin on server side
	I0729 17:34:36.634067   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:34:36.634076   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:36.634083   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:34:36.634894   96181 out.go:177] * Verifying ingress addon...
	I0729 17:34:36.634935   96181 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-145541 service yakd-dashboard -n yakd-dashboard
	
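	A minimal manual sketch of that YAKD access step, assuming the dashboard Pod carries the label app.kubernetes.io/name=yakd-dashboard (an assumption, not confirmed by this log), is to wait for Pod readiness and then open the service:
	
		kubectl --context addons-145541 -n yakd-dashboard wait --for=condition=ready pod --selector=app.kubernetes.io/name=yakd-dashboard --timeout=120s   # label selector is assumed
		minikube -p addons-145541 service yakd-dashboard -n yakd-dashboard
	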
	I0729 17:34:36.635404   96181 main.go:141] libmachine: (addons-145541) DBG | Closing plugin on server side
	I0729 17:34:36.635445   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:36.635455   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:34:36.637423   96181 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0729 17:34:36.638717   96181 main.go:141] libmachine: (addons-145541) DBG | Closing plugin on server side
	I0729 17:34:36.638727   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:36.638743   96181 main.go:141] libmachine: (addons-145541) DBG | Closing plugin on server side
	I0729 17:34:36.638740   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:34:36.638772   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:36.638785   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:34:36.638800   96181 addons.go:475] Verifying addon registry=true in "addons-145541"
	I0729 17:34:36.640270   96181 out.go:177] * Verifying registry addon...
	I0729 17:34:36.642131   96181 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0729 17:34:36.668697   96181 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0729 17:34:36.668720   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:36.668942   96181 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0729 17:34:36.668970   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:36.678386   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:36.678402   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:36.678686   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:36.678704   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	W0729 17:34:36.678809   96181 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0729 17:34:36.679325   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:36.679346   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:36.679564   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:36.679582   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:34:36.679613   96181 main.go:141] libmachine: (addons-145541) DBG | Closing plugin on server side
	I0729 17:34:36.875760   96181 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0729 17:34:37.163475   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:37.184096   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:37.656128   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:37.663763   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:37.751958   96181 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.350452991s)
	I0729 17:34:37.753575   96181 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0729 17:34:37.754524   96181 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.01941461s)
	I0729 17:34:37.754566   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:37.754575   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:37.754797   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:37.754811   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:34:37.754824   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:37.754833   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:37.755249   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:37.755278   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:34:37.755296   96181 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-145541"
	I0729 17:34:37.756975   96181 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0729 17:34:37.756987   96181 out.go:177] * Verifying csi-hostpath-driver addon...
	I0729 17:34:37.758979   96181 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0729 17:34:37.759000   96181 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0729 17:34:37.759769   96181 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0729 17:34:37.784163   96181 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0729 17:34:37.784184   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:37.860777   96181 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0729 17:34:37.860807   96181 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0729 17:34:37.963931   96181 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0729 17:34:37.963957   96181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0729 17:34:38.080753   96181 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0729 17:34:38.151199   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:38.154588   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:38.274772   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:38.404938   96181 pod_ready.go:102] pod "coredns-7db6d8ff4d-sn87l" in "kube-system" namespace has status "Ready":"False"
	I0729 17:34:38.645325   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:38.657233   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:38.768450   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:39.037267   96181 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.161456531s)
	I0729 17:34:39.037318   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:39.037330   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:39.037699   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:39.037725   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:34:39.037737   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:39.037746   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:39.038038   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:39.038073   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:34:39.038118   96181 main.go:141] libmachine: (addons-145541) DBG | Closing plugin on server side
	I0729 17:34:39.141834   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:39.146261   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:39.287885   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:39.667052   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:39.667092   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:39.670555   96181 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.589766398s)
	I0729 17:34:39.670599   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:39.670616   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:39.670902   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:39.670917   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:34:39.670925   96181 main.go:141] libmachine: Making call to close driver server
	I0729 17:34:39.670933   96181 main.go:141] libmachine: (addons-145541) Calling .Close
	I0729 17:34:39.671153   96181 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:34:39.671172   96181 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:34:39.671197   96181 main.go:141] libmachine: (addons-145541) DBG | Closing plugin on server side
	I0729 17:34:39.672658   96181 addons.go:475] Verifying addon gcp-auth=true in "addons-145541"
	I0729 17:34:39.674354   96181 out.go:177] * Verifying gcp-auth addon...
	I0729 17:34:39.676684   96181 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0729 17:34:39.690783   96181 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0729 17:34:39.690802   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:39.765908   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:40.141745   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:40.150619   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:40.182626   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:40.265732   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:40.395430   96181 pod_ready.go:97] pod "coredns-7db6d8ff4d-sn87l" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 17:34:39 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 17:34:28 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 17:34:28 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 17:34:28 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 17:34:28 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.242 HostIPs:[{IP:192.168.39.242}] PodIP: PodIPs:[] StartTime:2024-07-29 17:34:28 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-07-29 17:34:31 +0000 UTC,FinishedAt:2024-07-29 17:34:37 +0000 UTC,ContainerID:cri-o://6df697aa3396b53386356e783a4377c6f6409f3fb0bccf2f93c6fa4920cbc2f6,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://6df697aa3396b53386356e783a4377c6f6409f3fb0bccf2f93c6fa4920cbc2f6 Started:0xc002285200 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0729 17:34:40.395465   96181 pod_ready.go:81] duration metric: took 6.506738286s for pod "coredns-7db6d8ff4d-sn87l" in "kube-system" namespace to be "Ready" ...
	E0729 17:34:40.395478   96181 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-7db6d8ff4d-sn87l" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 17:34:39 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 17:34:28 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 17:34:28 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 17:34:28 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 17:34:28 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.242 HostIPs:[{IP:192.168.39.242}] PodIP: PodIPs:[] StartTime:2024-07-29 17:34:28 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-07-29 17:34:31 +0000 UTC,FinishedAt:2024-07-29 17:34:37 +0000 UTC,ContainerID:cri-o://6df697aa3396b53386356e783a4377c6f6409f3fb0bccf2f93c6fa4920cbc2f6,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://6df697aa3396b53386356e783a4377c6f6409f3fb0bccf2f93c6fa4920cbc2f6 Started:0xc002285200 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0729 17:34:40.395487   96181 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-145541" in "kube-system" namespace to be "Ready" ...
	I0729 17:34:40.402702   96181 pod_ready.go:92] pod "etcd-addons-145541" in "kube-system" namespace has status "Ready":"True"
	I0729 17:34:40.402723   96181 pod_ready.go:81] duration metric: took 7.226843ms for pod "etcd-addons-145541" in "kube-system" namespace to be "Ready" ...
	I0729 17:34:40.402734   96181 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-145541" in "kube-system" namespace to be "Ready" ...
	I0729 17:34:40.409378   96181 pod_ready.go:92] pod "kube-apiserver-addons-145541" in "kube-system" namespace has status "Ready":"True"
	I0729 17:34:40.409399   96181 pod_ready.go:81] duration metric: took 6.656909ms for pod "kube-apiserver-addons-145541" in "kube-system" namespace to be "Ready" ...
	I0729 17:34:40.409409   96181 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-145541" in "kube-system" namespace to be "Ready" ...
	I0729 17:34:40.417324   96181 pod_ready.go:92] pod "kube-controller-manager-addons-145541" in "kube-system" namespace has status "Ready":"True"
	I0729 17:34:40.417342   96181 pod_ready.go:81] duration metric: took 7.925291ms for pod "kube-controller-manager-addons-145541" in "kube-system" namespace to be "Ready" ...
	I0729 17:34:40.417352   96181 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-v6sd2" in "kube-system" namespace to be "Ready" ...
	I0729 17:34:40.424380   96181 pod_ready.go:92] pod "kube-proxy-v6sd2" in "kube-system" namespace has status "Ready":"True"
	I0729 17:34:40.424399   96181 pod_ready.go:81] duration metric: took 7.039978ms for pod "kube-proxy-v6sd2" in "kube-system" namespace to be "Ready" ...
	I0729 17:34:40.424409   96181 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-145541" in "kube-system" namespace to be "Ready" ...
	I0729 17:34:40.642506   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:40.647058   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:40.680715   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:40.767396   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:40.793560   96181 pod_ready.go:92] pod "kube-scheduler-addons-145541" in "kube-system" namespace has status "Ready":"True"
	I0729 17:34:40.793587   96181 pod_ready.go:81] duration metric: took 369.170033ms for pod "kube-scheduler-addons-145541" in "kube-system" namespace to be "Ready" ...
	I0729 17:34:40.793598   96181 pod_ready.go:38] duration metric: took 12.449299757s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 17:34:40.793617   96181 api_server.go:52] waiting for apiserver process to appear ...
	I0729 17:34:40.793683   96181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 17:34:40.860217   96181 api_server.go:72] duration metric: took 13.19682114s to wait for apiserver process to appear ...
	I0729 17:34:40.860252   96181 api_server.go:88] waiting for apiserver healthz status ...
	I0729 17:34:40.860280   96181 api_server.go:253] Checking apiserver healthz at https://192.168.39.242:8443/healthz ...
	I0729 17:34:40.866260   96181 api_server.go:279] https://192.168.39.242:8443/healthz returned 200:
	ok
	I0729 17:34:40.868136   96181 api_server.go:141] control plane version: v1.30.3
	I0729 17:34:40.868163   96181 api_server.go:131] duration metric: took 7.902752ms to wait for apiserver health ...
	I0729 17:34:40.868171   96181 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 17:34:40.999623   96181 system_pods.go:59] 19 kube-system pods found
	I0729 17:34:40.999668   96181 system_pods.go:61] "coredns-7db6d8ff4d-dfrfm" [8f7f3dfc-f445-447d-8b0f-f9768984eff7] Running
	I0729 17:34:40.999678   96181 system_pods.go:61] "coredns-7db6d8ff4d-sn87l" [d46ccfce-d103-42de-a6ae-00bf710b59a3] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
	I0729 17:34:40.999686   96181 system_pods.go:61] "csi-hostpath-attacher-0" [16a60d4b-4133-4f9e-ae7d-b4abafb1c2e6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0729 17:34:40.999692   96181 system_pods.go:61] "csi-hostpath-resizer-0" [fe8391ab-3ece-485a-812b-3821cd2dbbcc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0729 17:34:40.999700   96181 system_pods.go:61] "csi-hostpathplugin-p9qp9" [2c479653-5761-44ea-8d45-514170d3db15] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0729 17:34:40.999704   96181 system_pods.go:61] "etcd-addons-145541" [099a502a-2c8f-42c2-87dc-361eae8baa07] Running
	I0729 17:34:40.999709   96181 system_pods.go:61] "kube-apiserver-addons-145541" [04ca4891-47d7-45eb-a209-60d485c67801] Running
	I0729 17:34:40.999714   96181 system_pods.go:61] "kube-controller-manager-addons-145541" [be6f595a-7b71-4995-a979-12490e8d99d4] Running
	I0729 17:34:40.999723   96181 system_pods.go:61] "kube-ingress-dns-minikube" [dc7be156-e078-4c48-931f-5daba154a3f7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0729 17:34:40.999727   96181 system_pods.go:61] "kube-proxy-v6sd2" [4a80c5a1-59ca-4e68-b237-5e7e03f8c23e] Running
	I0729 17:34:40.999732   96181 system_pods.go:61] "kube-scheduler-addons-145541" [31414739-297a-4811-9da1-c9a50a3ac824] Running
	I0729 17:34:40.999741   96181 system_pods.go:61] "metrics-server-c59844bb4-twcpr" [729a1011-260e-49bc-9fe9-0f5a13a4f5d7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 17:34:40.999750   96181 system_pods.go:61] "nvidia-device-plugin-daemonset-4gjrg" [3288c0c8-9742-44dc-985f-33455a462b79] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0729 17:34:40.999763   96181 system_pods.go:61] "registry-698f998955-9qnhg" [ca8784f3-5a3c-4e49-b99f-0f6a32e7c737] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0729 17:34:40.999771   96181 system_pods.go:61] "registry-proxy-dgtch" [621f0921-7ec4-4046-b693-3dd1b6619b44] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0729 17:34:40.999784   96181 system_pods.go:61] "snapshot-controller-745499f584-hghjr" [3753f85c-83f0-4f02-962f-8bcd30183cc2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0729 17:34:40.999790   96181 system_pods.go:61] "snapshot-controller-745499f584-r6j6p" [f4dd2b5d-ada4-4612-a5bc-63c97bc31200] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0729 17:34:40.999795   96181 system_pods.go:61] "storage-provisioner" [5cd58c8b-201b-433a-917f-1382e5a8fa0a] Running
	I0729 17:34:40.999800   96181 system_pods.go:61] "tiller-deploy-6677d64bcd-d7vqp" [01075b35-8252-425f-8fc5-05b87bfaccdb] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0729 17:34:40.999806   96181 system_pods.go:74] duration metric: took 131.629741ms to wait for pod list to return data ...
	I0729 17:34:40.999815   96181 default_sa.go:34] waiting for default service account to be created ...
	I0729 17:34:41.142635   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:41.146329   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:41.180187   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:41.191916   96181 default_sa.go:45] found service account: "default"
	I0729 17:34:41.191940   96181 default_sa.go:55] duration metric: took 192.11702ms for default service account to be created ...
	I0729 17:34:41.191951   96181 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 17:34:41.271129   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:41.400020   96181 system_pods.go:86] 19 kube-system pods found
	I0729 17:34:41.400062   96181 system_pods.go:89] "coredns-7db6d8ff4d-dfrfm" [8f7f3dfc-f445-447d-8b0f-f9768984eff7] Running
	I0729 17:34:41.400075   96181 system_pods.go:89] "coredns-7db6d8ff4d-sn87l" [d46ccfce-d103-42de-a6ae-00bf710b59a3] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
	I0729 17:34:41.400086   96181 system_pods.go:89] "csi-hostpath-attacher-0" [16a60d4b-4133-4f9e-ae7d-b4abafb1c2e6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0729 17:34:41.400095   96181 system_pods.go:89] "csi-hostpath-resizer-0" [fe8391ab-3ece-485a-812b-3821cd2dbbcc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0729 17:34:41.400112   96181 system_pods.go:89] "csi-hostpathplugin-p9qp9" [2c479653-5761-44ea-8d45-514170d3db15] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0729 17:34:41.400123   96181 system_pods.go:89] "etcd-addons-145541" [099a502a-2c8f-42c2-87dc-361eae8baa07] Running
	I0729 17:34:41.400133   96181 system_pods.go:89] "kube-apiserver-addons-145541" [04ca4891-47d7-45eb-a209-60d485c67801] Running
	I0729 17:34:41.400143   96181 system_pods.go:89] "kube-controller-manager-addons-145541" [be6f595a-7b71-4995-a979-12490e8d99d4] Running
	I0729 17:34:41.400155   96181 system_pods.go:89] "kube-ingress-dns-minikube" [dc7be156-e078-4c48-931f-5daba154a3f7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0729 17:34:41.400164   96181 system_pods.go:89] "kube-proxy-v6sd2" [4a80c5a1-59ca-4e68-b237-5e7e03f8c23e] Running
	I0729 17:34:41.400174   96181 system_pods.go:89] "kube-scheduler-addons-145541" [31414739-297a-4811-9da1-c9a50a3ac824] Running
	I0729 17:34:41.400186   96181 system_pods.go:89] "metrics-server-c59844bb4-twcpr" [729a1011-260e-49bc-9fe9-0f5a13a4f5d7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 17:34:41.400200   96181 system_pods.go:89] "nvidia-device-plugin-daemonset-4gjrg" [3288c0c8-9742-44dc-985f-33455a462b79] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0729 17:34:41.400212   96181 system_pods.go:89] "registry-698f998955-9qnhg" [ca8784f3-5a3c-4e49-b99f-0f6a32e7c737] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0729 17:34:41.400225   96181 system_pods.go:89] "registry-proxy-dgtch" [621f0921-7ec4-4046-b693-3dd1b6619b44] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0729 17:34:41.400237   96181 system_pods.go:89] "snapshot-controller-745499f584-hghjr" [3753f85c-83f0-4f02-962f-8bcd30183cc2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0729 17:34:41.400252   96181 system_pods.go:89] "snapshot-controller-745499f584-r6j6p" [f4dd2b5d-ada4-4612-a5bc-63c97bc31200] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0729 17:34:41.400261   96181 system_pods.go:89] "storage-provisioner" [5cd58c8b-201b-433a-917f-1382e5a8fa0a] Running
	I0729 17:34:41.400275   96181 system_pods.go:89] "tiller-deploy-6677d64bcd-d7vqp" [01075b35-8252-425f-8fc5-05b87bfaccdb] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0729 17:34:41.400287   96181 system_pods.go:126] duration metric: took 208.329309ms to wait for k8s-apps to be running ...
	I0729 17:34:41.400301   96181 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 17:34:41.400360   96181 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:34:41.439928   96181 system_svc.go:56] duration metric: took 39.616511ms WaitForService to wait for kubelet
	I0729 17:34:41.439963   96181 kubeadm.go:582] duration metric: took 13.776574462s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 17:34:41.439988   96181 node_conditions.go:102] verifying NodePressure condition ...
	I0729 17:34:41.592232   96181 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 17:34:41.592258   96181 node_conditions.go:123] node cpu capacity is 2
	I0729 17:34:41.592283   96181 node_conditions.go:105] duration metric: took 152.288045ms to run NodePressure ...
	I0729 17:34:41.592295   96181 start.go:241] waiting for startup goroutines ...
	I0729 17:34:41.642322   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:41.645934   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:41.681116   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:41.765411   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:42.142042   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:42.146201   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:42.179730   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:42.266636   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:42.643003   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:42.646635   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:42.682076   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:42.766376   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:43.142828   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:43.146614   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:43.180012   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:43.266046   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:43.895194   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:43.900493   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:43.900902   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:43.901028   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:44.142049   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:44.145984   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:44.180529   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:44.265659   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:44.641923   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:44.645856   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:44.680141   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:44.765331   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:45.142535   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:45.146534   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:45.180404   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:45.265739   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:45.643531   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:45.647141   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:45.679501   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:45.765897   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:46.142419   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:46.145879   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:46.180597   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:46.269383   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:46.642821   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:46.646409   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:46.679978   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:46.765922   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:47.142590   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:47.146035   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:47.181523   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:47.265814   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:47.642041   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:47.645866   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:47.680154   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:47.766057   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:48.141874   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:48.146091   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:48.179403   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:48.264883   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:48.642596   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:48.646002   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:48.680429   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:48.765396   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:49.141941   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:49.146080   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:49.180795   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:49.265919   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:49.642505   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:49.646207   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:49.680047   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:49.766424   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:50.141918   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:50.146888   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:50.180319   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:50.265357   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:50.644960   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:50.647799   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:50.680721   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:50.767039   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:51.142107   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:51.146280   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:51.180408   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:51.266235   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:51.641530   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:51.647993   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:51.680683   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:51.766915   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:52.142562   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:52.146426   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:52.180283   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:52.266400   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:52.642206   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:52.645623   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:52.680974   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:52.765634   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:53.143654   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:53.146889   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:53.180360   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:53.266094   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:53.641985   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:53.645989   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:53.681269   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:53.766237   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:54.142398   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:54.145980   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:54.181251   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:54.265538   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:54.641958   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:54.646251   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:54.680092   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:54.766712   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:55.142666   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:55.146684   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:55.180356   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:55.269281   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:55.642303   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:55.645870   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:55.680805   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:55.765997   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:56.141536   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:56.146675   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:56.180682   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:56.267076   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:56.641901   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:56.645791   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:56.680533   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:56.765344   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:57.149630   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:57.156147   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:57.185555   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:57.265917   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:57.651568   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:57.651793   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:57.680335   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:57.765655   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:58.141765   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:58.146595   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:58.182744   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:58.266033   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:58.642303   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:58.646295   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:58.680606   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:58.765297   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:59.149389   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:59.158949   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:59.180376   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:59.266788   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:34:59.642723   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:34:59.646529   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:34:59.679841   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:34:59.765415   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:00.344078   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:00.345906   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:35:00.346235   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:00.346560   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:00.642289   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:00.646284   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:35:00.680182   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:00.765192   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:01.149944   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:01.150244   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:35:01.182627   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:01.267727   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:01.641411   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:01.650940   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:35:01.684641   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:01.765121   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:02.141568   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:02.146662   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:35:02.180968   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:02.265413   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:02.642449   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:02.646970   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:35:02.681136   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:02.765439   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:03.453597   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:03.468694   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:35:03.468821   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:03.469331   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:03.642132   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:03.645967   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:35:03.680947   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:03.765762   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:04.142496   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:04.145835   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:35:04.180627   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:04.266328   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:04.642584   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:04.647185   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:35:04.679612   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:04.766097   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:05.142491   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:05.145837   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:35:05.180643   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:05.265618   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:05.643236   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:05.647598   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:35:05.680905   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:05.767771   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:06.142622   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:06.146661   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 17:35:06.180035   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:06.267945   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:06.643511   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:06.645819   96181 kapi.go:107] duration metric: took 30.003687026s to wait for kubernetes.io/minikube-addons=registry ...
	I0729 17:35:06.680238   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:06.764611   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:07.142674   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:07.180135   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:07.265298   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:07.641659   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:07.680304   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:07.765826   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:08.141640   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:08.179892   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:08.266283   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:08.641673   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:08.680160   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:08.765394   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:09.142145   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:09.181264   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:09.265756   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:09.642904   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:09.680726   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:09.766968   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:10.142682   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:10.179795   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:10.265809   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:10.642808   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:10.679861   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:10.766280   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:11.145309   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:11.180503   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:11.265328   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:11.642148   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:11.681102   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:11.765401   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:12.145261   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:12.184415   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:12.265067   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:12.646269   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:12.680921   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:12.765545   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:13.142077   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:13.179947   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:13.266380   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:13.648815   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:13.680348   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:13.772316   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:14.141778   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:14.183434   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:14.267831   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:14.642492   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:14.681054   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:14.767405   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:15.142071   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:15.180397   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:15.265421   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:15.641250   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:15.681287   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:15.775225   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:16.142492   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:16.181790   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:16.267318   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:16.649522   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:16.680641   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:16.765647   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:17.143177   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:17.180243   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:17.267616   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:17.642549   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:17.681067   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:17.766047   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:18.141433   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:18.180782   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:18.265518   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:18.641952   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:18.680531   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:18.765334   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:19.141594   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:19.182782   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:19.271071   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:19.642145   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:19.681035   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:19.789295   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:20.141991   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:20.181018   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:20.265265   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:20.642228   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:20.681107   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:20.764842   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:21.142461   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:21.180830   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:21.265307   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:21.641431   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:21.679864   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:21.772918   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:22.142889   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:22.180107   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:22.264754   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:22.642473   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:22.680284   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:22.765684   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:23.141718   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:23.179821   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:23.266085   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:23.642736   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:23.680145   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:23.765225   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:24.142822   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:24.180052   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:24.264695   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:24.647554   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:24.681147   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:24.765062   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:25.141799   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:25.184376   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:25.412853   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:25.815023   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:25.816718   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:25.823066   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:26.146922   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:26.181724   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:26.272985   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:26.649030   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:26.681924   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:26.770149   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:27.145187   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:27.186222   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:27.271442   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:27.655970   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:27.699385   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:27.767076   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:28.142799   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:28.181233   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:28.265898   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:28.642260   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:28.680539   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:28.766100   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:29.144697   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:29.181013   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:29.266413   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:29.642698   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:29.681604   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:29.775178   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:30.143037   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:30.187310   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:30.269140   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:30.644265   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:30.680902   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:30.773928   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:31.141376   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:31.179576   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:31.265488   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:31.642794   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:31.682983   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:31.765572   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:32.213013   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:32.221393   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:32.265827   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:32.642235   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:32.680993   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:32.766247   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:33.143765   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:33.180482   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:33.265684   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:33.642411   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:33.679628   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:33.765310   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:34.142004   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:34.180646   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:34.265523   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 17:35:34.642099   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:34.680535   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:34.765206   96181 kapi.go:107] duration metric: took 57.005436432s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0729 17:35:35.141598   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:35.179973   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:35.642480   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:35.680619   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:36.142125   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:36.180337   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:36.642207   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:36.680947   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:37.142740   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:37.180710   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:37.642395   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:37.680302   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:38.141750   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:38.180260   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:38.642990   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:38.680173   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:39.142908   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:39.180605   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:39.641753   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:39.680444   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:40.142046   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:40.180390   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:40.641867   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:40.680204   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:41.142555   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:41.181054   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:41.641868   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:41.680777   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:42.141941   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:42.180415   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:42.644029   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:42.682083   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:43.142873   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:43.180201   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:43.642245   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:43.680750   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:44.145621   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:44.194008   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:44.698450   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:44.698978   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:45.141928   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:45.181304   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:45.641407   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:45.679984   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:46.142280   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:46.180354   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:46.642327   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:46.680775   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:47.142404   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:47.180148   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:47.642590   96181 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 17:35:47.687135   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:48.144659   96181 kapi.go:107] duration metric: took 1m11.507233239s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0729 17:35:48.181780   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:48.680723   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:49.180312   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:49.680765   96181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 17:35:50.181716   96181 kapi.go:107] duration metric: took 1m10.505030968s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0729 17:35:50.183302   96181 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-145541 cluster.
	I0729 17:35:50.184505   96181 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0729 17:35:50.185759   96181 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0729 17:35:50.186964   96181 out.go:177] * Enabled addons: storage-provisioner, metrics-server, inspektor-gadget, cloud-spanner, helm-tiller, nvidia-device-plugin, yakd, ingress-dns, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0729 17:35:50.188081   96181 addons.go:510] duration metric: took 1m22.524677806s for enable addons: enabled=[storage-provisioner metrics-server inspektor-gadget cloud-spanner helm-tiller nvidia-device-plugin yakd ingress-dns default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0729 17:35:50.188116   96181 start.go:246] waiting for cluster config update ...
	I0729 17:35:50.188134   96181 start.go:255] writing updated cluster config ...
	I0729 17:35:50.188361   96181 ssh_runner.go:195] Run: rm -f paused
	I0729 17:35:50.239638   96181 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 17:35:50.241190   96181 out.go:177] * Done! kubectl is now configured to use "addons-145541" cluster and "default" namespace by default
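	Editor's note on the gcp-auth messages above: they refer to the `gcp-auth-skip-secret` opt-out label. A minimal, hypothetical sketch of a pod that opts out of credential mounting is shown below; the pod and container names are illustrative only and do not come from this test run.

	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds                 # hypothetical name, for illustration only
	  labels:
	    gcp-auth-skip-secret: "true"     # presence of this key is what the log message refers to; "true" is a conventional value
	spec:
	  containers:
	  - name: app                        # hypothetical container name
	    image: busybox
	    command: ["sleep", "3600"]

	As the messages above note, pods that already exist only pick up (or skip) the mounted credentials after they are recreated or after the addon is re-enabled with --refresh.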
	
	
	==> CRI-O <==
	Jul 29 17:41:50 addons-145541 crio[688]: time="2024-07-29 17:41:50.245046808Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a00b9889-72b3-4b21-a9ed-15414309b9fa name=/runtime.v1.RuntimeService/Version
	Jul 29 17:41:50 addons-145541 crio[688]: time="2024-07-29 17:41:50.246469687Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8bf12101-3717-41b1-aeaa-cba427ad6be0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:41:50 addons-145541 crio[688]: time="2024-07-29 17:41:50.247638883Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722274910247615927,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589582,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8bf12101-3717-41b1-aeaa-cba427ad6be0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:41:50 addons-145541 crio[688]: time="2024-07-29 17:41:50.248654068Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ccb115a0-37c7-4f4d-a37a-21be50a05354 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:41:50 addons-145541 crio[688]: time="2024-07-29 17:41:50.248765295Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ccb115a0-37c7-4f4d-a37a-21be50a05354 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:41:50 addons-145541 crio[688]: time="2024-07-29 17:41:50.249718955Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:99826bc575a85c1e96d46d7f26df13d5cb2a1b898fc326b32d48a7efd8f02831,PodSandboxId:b6f81bcd48db61778a9dd9460087d5147b3cc4196c0e718f8f0bf3f9d8dc3761,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722274740520038207,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-t9gs9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6444606c-a772-4e9c-b313-7187e9758717,},Annotations:map[string]string{io.kubernetes.container.hash: 38f41b7a,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97c8d95747fbf6ab117058c4b7438f0171e31e4257572c17ea84b6e432a18d0c,PodSandboxId:9ea04a532b4f2b2058f4213f1ff0c1db9ac2096d0f0bb766a0161f85b92aebd2,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722274601285458508,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b9cfcd35-b093-46c2-ae44-2c916c5de80b,},Annotations:map[string]string{io.kubernet
es.container.hash: ef45ff76,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8deb3bdf34f3796bc8ec204614f887b4343ab93caee7b197db99f1d17370133f,PodSandboxId:8a766dd437481288895dc64c3f05ab8ad95b913c9d8950ef174218fe8ec4c5e9,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1722274590108060674,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-nvghv,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 011cd85f-b07b-46e4-b4bd-1f47b7dc24df,},Annotations:map[string]string{io.kubernetes.container.hash: 8bf92d6c,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61c8df7c06a7c24a7a18e22ca66754de29c01c30cfdcb57f30806e3d575cc724,PodSandboxId:6e4334647505cf853335a0f6cd454ceb570e606fd9cb00f65d9a2fb86165f4f9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722274554709310587,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubern
etes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9b8d7938-0a77-4cff-9e81-6455967a4c76,},Annotations:map[string]string{io.kubernetes.container.hash: 9a603d37,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1473edccf5c58cf2c9f0a63af99d7ed0d95c2ec5507062cdd22dab7e3e693d54,PodSandboxId:581e355dec7028fdbdb7314e552979106a837b66baffba26a126481edc270958,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1722274517031656954,Labels:map[string]string{io.kubernetes.container.name: local-
path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-wkvlg,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 2081ac21-9f79-448b-ae42-1077fae38ef9,},Annotations:map[string]string{io.kubernetes.container.hash: f9ce71f5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f5c7001264b698e30818eff99cbeb83a51439e58115fd60a99f663876ceb6e7,PodSandboxId:887eb3b237ee12aa9cccc13ebbe16d1acc46aa8eab81d3d55fd3405f4b17d3b6,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722274500463481
915,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-twcpr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 729a1011-260e-49bc-9fe9-0f5a13a4f5d7,},Annotations:map[string]string{io.kubernetes.container.hash: 8a02c318,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2638d0f3fe4e51cdc6dae5202502b3489349eb03a0433187d89afa4f04258bb0,PodSandboxId:ae659103a1f4af5c82dd9058adbadc9d1989d7dc646f730d1c896b394920d657,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f56173
42c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722274474327653971,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cd58c8b-201b-433a-917f-1382e5a8fa0a,},Annotations:map[string]string{io.kubernetes.container.hash: 13db8ad4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28f0536849e4d33ef5404799f1da0c33720395f13b1a5e639f3cdc93f8bff5e8,PodSandboxId:2989fd014413395f0ff80529c0cefc21fe42f921b6c62504d05c51a46df43c66,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00
797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722274471486750466,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-dfrfm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f7f3dfc-f445-447d-8b0f-f9768984eff7,},Annotations:map[string]string{io.kubernetes.container.hash: 5355ebe7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db9a7cd1c02e6cfdb6d03cbb5630580a6a0eafef14ebdbae1a480ee236727e02,PodSandboxId:5d4162abe009f4c97046ae8539bd5ef003e3343df2f7d2941a1cc5263d691835,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec
{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722274468856613349,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v6sd2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a80c5a1-59ca-4e68-b237-5e7e03f8c23e,},Annotations:map[string]string{io.kubernetes.container.hash: d5cf5e75,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:954122fb41ccc0077861429e96b58887533974d1bc428b25958d6a47ceda9b78,PodSandboxId:f823acb8f6efe7e1f3e963afed288e21bc166e055f7c714360ecae2f989e96eb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7e
b138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722274449044766573,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-145541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83fe687db118bee4366eb90c9db19856,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:905e9468d35c61aa82eeec9e1d23a5f0b7a51d206140d497222588fd6846d273,PodSandboxId:b5704093a7247f101b286297047314216bef4d86693be84020998c11672e1615,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b767
22eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722274449015587187,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-145541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dce093dfe1a1321ec268f1e97babb4b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a81e90aed1439aa9dbccf7dec2273f8264cb1879c7b67c6ac17688632472920,PodSandboxId:eb740979212835e357a0d7a2b7422172cff1a1c49d65450b42492fe408cacc5f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527e
f0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722274449025150954,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-145541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61d5145776697d68caff9315eb068a51,},Annotations:map[string]string{io.kubernetes.container.hash: f61661de,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3b4e3a799006df5d6b91d4a312bbf6b40762d89407b95a4d33f84e0e3504b98,PodSandboxId:bcb11c13549f7843a069467b69c7682192c250a03b5c3357cc5f80040a177f80,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722274449021513077,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-145541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 632974a84e8c8f2d87e51d80d84b674c,},Annotations:map[string]string{io.kubernetes.container.hash: 2955e17,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ccb115a0-37c7-4f4d-a37a-21be50a05354 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:41:50 addons-145541 crio[688]: time="2024-07-29 17:41:50.252560208Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fa925f6a-3754-477f-af75-edf6dc6afdb2 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 29 17:41:50 addons-145541 crio[688]: time="2024-07-29 17:41:50.252855858Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:b6f81bcd48db61778a9dd9460087d5147b3cc4196c0e718f8f0bf3f9d8dc3761,Metadata:&PodSandboxMetadata{Name:hello-world-app-6778b5fc9f-t9gs9,Uid:6444606c-a772-4e9c-b313-7187e9758717,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722274739546256917,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-t9gs9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6444606c-a772-4e9c-b313-7187e9758717,pod-template-hash: 6778b5fc9f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T17:38:59.232422102Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9ea04a532b4f2b2058f4213f1ff0c1db9ac2096d0f0bb766a0161f85b92aebd2,Metadata:&PodSandboxMetadata{Name:nginx,Uid:b9cfcd35-b093-46c2-ae44-2c916c5de80b,Namespace:default,Attempt:0,}
,State:SANDBOX_READY,CreatedAt:1722274599057119100,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b9cfcd35-b093-46c2-ae44-2c916c5de80b,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T17:36:38.508468789Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8a766dd437481288895dc64c3f05ab8ad95b913c9d8950ef174218fe8ec4c5e9,Metadata:&PodSandboxMetadata{Name:headlamp-7867546754-nvghv,Uid:011cd85f-b07b-46e4-b4bd-1f47b7dc24df,Namespace:headlamp,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722274585231918013,Labels:map[string]string{app.kubernetes.io/instance: headlamp,app.kubernetes.io/name: headlamp,io.kubernetes.container.name: POD,io.kubernetes.pod.name: headlamp-7867546754-nvghv,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 011cd85f-b07b-46e4-b4bd-1f47b7dc24df,pod-template-hash: 7867546754,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-
07-29T17:36:24.918464220Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6e4334647505cf853335a0f6cd454ceb570e606fd9cb00f65d9a2fb86165f4f9,Metadata:&PodSandboxMetadata{Name:busybox,Uid:9b8d7938-0a77-4cff-9e81-6455967a4c76,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722274553582821898,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9b8d7938-0a77-4cff-9e81-6455967a4c76,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T17:35:53.271332269Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:887eb3b237ee12aa9cccc13ebbe16d1acc46aa8eab81d3d55fd3405f4b17d3b6,Metadata:&PodSandboxMetadata{Name:metrics-server-c59844bb4-twcpr,Uid:729a1011-260e-49bc-9fe9-0f5a13a4f5d7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722274473866320188,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.po
d.name: metrics-server-c59844bb4-twcpr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 729a1011-260e-49bc-9fe9-0f5a13a4f5d7,k8s-app: metrics-server,pod-template-hash: c59844bb4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T17:34:33.532532145Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:581e355dec7028fdbdb7314e552979106a837b66baffba26a126481edc270958,Metadata:&PodSandboxMetadata{Name:local-path-provisioner-8d985888d-wkvlg,Uid:2081ac21-9f79-448b-ae42-1077fae38ef9,Namespace:local-path-storage,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722274473848546995,Labels:map[string]string{app: local-path-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: local-path-provisioner-8d985888d-wkvlg,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 2081ac21-9f79-448b-ae42-1077fae38ef9,pod-template-hash: 8d985888d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T17:34:33.181614203Z,kubernetes.io/config.sourc
e: api,},RuntimeHandler:,},&PodSandbox{Id:ae659103a1f4af5c82dd9058adbadc9d1989d7dc646f730d1c896b394920d657,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:5cd58c8b-201b-433a-917f-1382e5a8fa0a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722274473041926430,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cd58c8b-201b-433a-917f-1382e5a8fa0a,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"i
magePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-29T17:34:32.710356887Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2989fd014413395f0ff80529c0cefc21fe42f921b6c62504d05c51a46df43c66,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-dfrfm,Uid:8f7f3dfc-f445-447d-8b0f-f9768984eff7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722274468403100599,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-dfrfm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f7f3dfc-f445-447d-8b0f-f9768984eff7,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T17:34:28.094814145Z,kubernetes.
io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5d4162abe009f4c97046ae8539bd5ef003e3343df2f7d2941a1cc5263d691835,Metadata:&PodSandboxMetadata{Name:kube-proxy-v6sd2,Uid:4a80c5a1-59ca-4e68-b237-5e7e03f8c23e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722274468204191322,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-v6sd2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a80c5a1-59ca-4e68-b237-5e7e03f8c23e,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T17:34:27.897289930Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b5704093a7247f101b286297047314216bef4d86693be84020998c11672e1615,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-145541,Uid:2dce093dfe1a1321ec268f1e97babb4b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722274448837060293,Labels:map[string]string{component
: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-145541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dce093dfe1a1321ec268f1e97babb4b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2dce093dfe1a1321ec268f1e97babb4b,kubernetes.io/config.seen: 2024-07-29T17:34:08.371010576Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:eb740979212835e357a0d7a2b7422172cff1a1c49d65450b42492fe408cacc5f,Metadata:&PodSandboxMetadata{Name:etcd-addons-145541,Uid:61d5145776697d68caff9315eb068a51,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722274448829758491,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-145541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61d5145776697d68caff9315eb068a51,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.242:2379,
kubernetes.io/config.hash: 61d5145776697d68caff9315eb068a51,kubernetes.io/config.seen: 2024-07-29T17:34:08.371004822Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f823acb8f6efe7e1f3e963afed288e21bc166e055f7c714360ecae2f989e96eb,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-145541,Uid:83fe687db118bee4366eb90c9db19856,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722274448828178567,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-145541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83fe687db118bee4366eb90c9db19856,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 83fe687db118bee4366eb90c9db19856,kubernetes.io/config.seen: 2024-07-29T17:34:08.371009599Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:bcb11c13549f7843a069467b69c7682192c250a03b5c3357cc5f80040a177f80,Metadata:&PodSandboxMe
tadata{Name:kube-apiserver-addons-145541,Uid:632974a84e8c8f2d87e51d80d84b674c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722274448825190643,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-145541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 632974a84e8c8f2d87e51d80d84b674c,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.242:8443,kubernetes.io/config.hash: 632974a84e8c8f2d87e51d80d84b674c,kubernetes.io/config.seen: 2024-07-29T17:34:08.371008319Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=fa925f6a-3754-477f-af75-edf6dc6afdb2 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 29 17:41:50 addons-145541 crio[688]: time="2024-07-29 17:41:50.253664055Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8d0795e1-b873-45b1-97a5-c2215469f169 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:41:50 addons-145541 crio[688]: time="2024-07-29 17:41:50.253730046Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8d0795e1-b873-45b1-97a5-c2215469f169 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:41:50 addons-145541 crio[688]: time="2024-07-29 17:41:50.254069432Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:99826bc575a85c1e96d46d7f26df13d5cb2a1b898fc326b32d48a7efd8f02831,PodSandboxId:b6f81bcd48db61778a9dd9460087d5147b3cc4196c0e718f8f0bf3f9d8dc3761,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722274740520038207,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-t9gs9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6444606c-a772-4e9c-b313-7187e9758717,},Annotations:map[string]string{io.kubernetes.container.hash: 38f41b7a,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97c8d95747fbf6ab117058c4b7438f0171e31e4257572c17ea84b6e432a18d0c,PodSandboxId:9ea04a532b4f2b2058f4213f1ff0c1db9ac2096d0f0bb766a0161f85b92aebd2,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722274601285458508,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b9cfcd35-b093-46c2-ae44-2c916c5de80b,},Annotations:map[string]string{io.kubernet
es.container.hash: ef45ff76,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8deb3bdf34f3796bc8ec204614f887b4343ab93caee7b197db99f1d17370133f,PodSandboxId:8a766dd437481288895dc64c3f05ab8ad95b913c9d8950ef174218fe8ec4c5e9,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1722274590108060674,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-nvghv,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 011cd85f-b07b-46e4-b4bd-1f47b7dc24df,},Annotations:map[string]string{io.kubernetes.container.hash: 8bf92d6c,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61c8df7c06a7c24a7a18e22ca66754de29c01c30cfdcb57f30806e3d575cc724,PodSandboxId:6e4334647505cf853335a0f6cd454ceb570e606fd9cb00f65d9a2fb86165f4f9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722274554709310587,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubern
etes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9b8d7938-0a77-4cff-9e81-6455967a4c76,},Annotations:map[string]string{io.kubernetes.container.hash: 9a603d37,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1473edccf5c58cf2c9f0a63af99d7ed0d95c2ec5507062cdd22dab7e3e693d54,PodSandboxId:581e355dec7028fdbdb7314e552979106a837b66baffba26a126481edc270958,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1722274517031656954,Labels:map[string]string{io.kubernetes.container.name: local-
path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-wkvlg,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 2081ac21-9f79-448b-ae42-1077fae38ef9,},Annotations:map[string]string{io.kubernetes.container.hash: f9ce71f5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f5c7001264b698e30818eff99cbeb83a51439e58115fd60a99f663876ceb6e7,PodSandboxId:887eb3b237ee12aa9cccc13ebbe16d1acc46aa8eab81d3d55fd3405f4b17d3b6,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722274500463481
915,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-twcpr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 729a1011-260e-49bc-9fe9-0f5a13a4f5d7,},Annotations:map[string]string{io.kubernetes.container.hash: 8a02c318,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2638d0f3fe4e51cdc6dae5202502b3489349eb03a0433187d89afa4f04258bb0,PodSandboxId:ae659103a1f4af5c82dd9058adbadc9d1989d7dc646f730d1c896b394920d657,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f56173
42c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722274474327653971,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cd58c8b-201b-433a-917f-1382e5a8fa0a,},Annotations:map[string]string{io.kubernetes.container.hash: 13db8ad4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28f0536849e4d33ef5404799f1da0c33720395f13b1a5e639f3cdc93f8bff5e8,PodSandboxId:2989fd014413395f0ff80529c0cefc21fe42f921b6c62504d05c51a46df43c66,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00
797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722274471486750466,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-dfrfm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f7f3dfc-f445-447d-8b0f-f9768984eff7,},Annotations:map[string]string{io.kubernetes.container.hash: 5355ebe7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db9a7cd1c02e6cfdb6d03cbb5630580a6a0eafef14ebdbae1a480ee236727e02,PodSandboxId:5d4162abe009f4c97046ae8539bd5ef003e3343df2f7d2941a1cc5263d691835,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec
{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722274468856613349,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v6sd2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a80c5a1-59ca-4e68-b237-5e7e03f8c23e,},Annotations:map[string]string{io.kubernetes.container.hash: d5cf5e75,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:954122fb41ccc0077861429e96b58887533974d1bc428b25958d6a47ceda9b78,PodSandboxId:f823acb8f6efe7e1f3e963afed288e21bc166e055f7c714360ecae2f989e96eb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7e
b138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722274449044766573,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-145541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83fe687db118bee4366eb90c9db19856,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:905e9468d35c61aa82eeec9e1d23a5f0b7a51d206140d497222588fd6846d273,PodSandboxId:b5704093a7247f101b286297047314216bef4d86693be84020998c11672e1615,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b767
22eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722274449015587187,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-145541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dce093dfe1a1321ec268f1e97babb4b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a81e90aed1439aa9dbccf7dec2273f8264cb1879c7b67c6ac17688632472920,PodSandboxId:eb740979212835e357a0d7a2b7422172cff1a1c49d65450b42492fe408cacc5f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527e
f0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722274449025150954,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-145541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61d5145776697d68caff9315eb068a51,},Annotations:map[string]string{io.kubernetes.container.hash: f61661de,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3b4e3a799006df5d6b91d4a312bbf6b40762d89407b95a4d33f84e0e3504b98,PodSandboxId:bcb11c13549f7843a069467b69c7682192c250a03b5c3357cc5f80040a177f80,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722274449021513077,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-145541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 632974a84e8c8f2d87e51d80d84b674c,},Annotations:map[string]string{io.kubernetes.container.hash: 2955e17,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8d0795e1-b873-45b1-97a5-c2215469f169 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:41:50 addons-145541 crio[688]: time="2024-07-29 17:41:50.290182151Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=61cd2e12-1bb1-4a1d-b278-77058e1ea767 name=/runtime.v1.RuntimeService/Version
	Jul 29 17:41:50 addons-145541 crio[688]: time="2024-07-29 17:41:50.290279294Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=61cd2e12-1bb1-4a1d-b278-77058e1ea767 name=/runtime.v1.RuntimeService/Version
	Jul 29 17:41:50 addons-145541 crio[688]: time="2024-07-29 17:41:50.291713267Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=203d5e85-9742-4eb9-9573-a901ea52fe4c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:41:50 addons-145541 crio[688]: time="2024-07-29 17:41:50.293368671Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722274910293292234,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589582,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=203d5e85-9742-4eb9-9573-a901ea52fe4c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:41:50 addons-145541 crio[688]: time="2024-07-29 17:41:50.294311767Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b2d6f726-6f23-4cc7-8df8-bbb4d25f1c36 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:41:50 addons-145541 crio[688]: time="2024-07-29 17:41:50.294381033Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b2d6f726-6f23-4cc7-8df8-bbb4d25f1c36 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:41:50 addons-145541 crio[688]: time="2024-07-29 17:41:50.294724323Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:99826bc575a85c1e96d46d7f26df13d5cb2a1b898fc326b32d48a7efd8f02831,PodSandboxId:b6f81bcd48db61778a9dd9460087d5147b3cc4196c0e718f8f0bf3f9d8dc3761,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722274740520038207,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-t9gs9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6444606c-a772-4e9c-b313-7187e9758717,},Annotations:map[string]string{io.kubernetes.container.hash: 38f41b7a,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97c8d95747fbf6ab117058c4b7438f0171e31e4257572c17ea84b6e432a18d0c,PodSandboxId:9ea04a532b4f2b2058f4213f1ff0c1db9ac2096d0f0bb766a0161f85b92aebd2,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722274601285458508,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b9cfcd35-b093-46c2-ae44-2c916c5de80b,},Annotations:map[string]string{io.kubernet
es.container.hash: ef45ff76,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8deb3bdf34f3796bc8ec204614f887b4343ab93caee7b197db99f1d17370133f,PodSandboxId:8a766dd437481288895dc64c3f05ab8ad95b913c9d8950ef174218fe8ec4c5e9,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1722274590108060674,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-nvghv,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 011cd85f-b07b-46e4-b4bd-1f47b7dc24df,},Annotations:map[string]string{io.kubernetes.container.hash: 8bf92d6c,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61c8df7c06a7c24a7a18e22ca66754de29c01c30cfdcb57f30806e3d575cc724,PodSandboxId:6e4334647505cf853335a0f6cd454ceb570e606fd9cb00f65d9a2fb86165f4f9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722274554709310587,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubern
etes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9b8d7938-0a77-4cff-9e81-6455967a4c76,},Annotations:map[string]string{io.kubernetes.container.hash: 9a603d37,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1473edccf5c58cf2c9f0a63af99d7ed0d95c2ec5507062cdd22dab7e3e693d54,PodSandboxId:581e355dec7028fdbdb7314e552979106a837b66baffba26a126481edc270958,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1722274517031656954,Labels:map[string]string{io.kubernetes.container.name: local-
path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-wkvlg,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 2081ac21-9f79-448b-ae42-1077fae38ef9,},Annotations:map[string]string{io.kubernetes.container.hash: f9ce71f5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f5c7001264b698e30818eff99cbeb83a51439e58115fd60a99f663876ceb6e7,PodSandboxId:887eb3b237ee12aa9cccc13ebbe16d1acc46aa8eab81d3d55fd3405f4b17d3b6,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722274500463481
915,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-twcpr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 729a1011-260e-49bc-9fe9-0f5a13a4f5d7,},Annotations:map[string]string{io.kubernetes.container.hash: 8a02c318,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2638d0f3fe4e51cdc6dae5202502b3489349eb03a0433187d89afa4f04258bb0,PodSandboxId:ae659103a1f4af5c82dd9058adbadc9d1989d7dc646f730d1c896b394920d657,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f56173
42c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722274474327653971,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cd58c8b-201b-433a-917f-1382e5a8fa0a,},Annotations:map[string]string{io.kubernetes.container.hash: 13db8ad4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28f0536849e4d33ef5404799f1da0c33720395f13b1a5e639f3cdc93f8bff5e8,PodSandboxId:2989fd014413395f0ff80529c0cefc21fe42f921b6c62504d05c51a46df43c66,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00
797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722274471486750466,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-dfrfm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f7f3dfc-f445-447d-8b0f-f9768984eff7,},Annotations:map[string]string{io.kubernetes.container.hash: 5355ebe7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db9a7cd1c02e6cfdb6d03cbb5630580a6a0eafef14ebdbae1a480ee236727e02,PodSandboxId:5d4162abe009f4c97046ae8539bd5ef003e3343df2f7d2941a1cc5263d691835,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec
{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722274468856613349,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v6sd2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a80c5a1-59ca-4e68-b237-5e7e03f8c23e,},Annotations:map[string]string{io.kubernetes.container.hash: d5cf5e75,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:954122fb41ccc0077861429e96b58887533974d1bc428b25958d6a47ceda9b78,PodSandboxId:f823acb8f6efe7e1f3e963afed288e21bc166e055f7c714360ecae2f989e96eb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7e
b138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722274449044766573,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-145541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83fe687db118bee4366eb90c9db19856,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:905e9468d35c61aa82eeec9e1d23a5f0b7a51d206140d497222588fd6846d273,PodSandboxId:b5704093a7247f101b286297047314216bef4d86693be84020998c11672e1615,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b767
22eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722274449015587187,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-145541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dce093dfe1a1321ec268f1e97babb4b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a81e90aed1439aa9dbccf7dec2273f8264cb1879c7b67c6ac17688632472920,PodSandboxId:eb740979212835e357a0d7a2b7422172cff1a1c49d65450b42492fe408cacc5f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527e
f0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722274449025150954,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-145541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61d5145776697d68caff9315eb068a51,},Annotations:map[string]string{io.kubernetes.container.hash: f61661de,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3b4e3a799006df5d6b91d4a312bbf6b40762d89407b95a4d33f84e0e3504b98,PodSandboxId:bcb11c13549f7843a069467b69c7682192c250a03b5c3357cc5f80040a177f80,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722274449021513077,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-145541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 632974a84e8c8f2d87e51d80d84b674c,},Annotations:map[string]string{io.kubernetes.container.hash: 2955e17,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b2d6f726-6f23-4cc7-8df8-bbb4d25f1c36 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:41:50 addons-145541 crio[688]: time="2024-07-29 17:41:50.329239355Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0e51b9f3-3e71-43c9-9ef4-5de82a5cc35d name=/runtime.v1.RuntimeService/Version
	Jul 29 17:41:50 addons-145541 crio[688]: time="2024-07-29 17:41:50.329308371Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0e51b9f3-3e71-43c9-9ef4-5de82a5cc35d name=/runtime.v1.RuntimeService/Version
	Jul 29 17:41:50 addons-145541 crio[688]: time="2024-07-29 17:41:50.330331303Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6d5f673e-cf4e-4d02-ab6b-d319d3cd5471 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:41:50 addons-145541 crio[688]: time="2024-07-29 17:41:50.331661525Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722274910331635444,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589582,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6d5f673e-cf4e-4d02-ab6b-d319d3cd5471 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:41:50 addons-145541 crio[688]: time="2024-07-29 17:41:50.332292807Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6a0ebc6f-2b22-48cf-a031-f0645787bf85 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:41:50 addons-145541 crio[688]: time="2024-07-29 17:41:50.332362751Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6a0ebc6f-2b22-48cf-a031-f0645787bf85 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:41:50 addons-145541 crio[688]: time="2024-07-29 17:41:50.332636850Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:99826bc575a85c1e96d46d7f26df13d5cb2a1b898fc326b32d48a7efd8f02831,PodSandboxId:b6f81bcd48db61778a9dd9460087d5147b3cc4196c0e718f8f0bf3f9d8dc3761,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722274740520038207,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-t9gs9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6444606c-a772-4e9c-b313-7187e9758717,},Annotations:map[string]string{io.kubernetes.container.hash: 38f41b7a,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97c8d95747fbf6ab117058c4b7438f0171e31e4257572c17ea84b6e432a18d0c,PodSandboxId:9ea04a532b4f2b2058f4213f1ff0c1db9ac2096d0f0bb766a0161f85b92aebd2,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722274601285458508,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b9cfcd35-b093-46c2-ae44-2c916c5de80b,},Annotations:map[string]string{io.kubernet
es.container.hash: ef45ff76,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8deb3bdf34f3796bc8ec204614f887b4343ab93caee7b197db99f1d17370133f,PodSandboxId:8a766dd437481288895dc64c3f05ab8ad95b913c9d8950ef174218fe8ec4c5e9,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1722274590108060674,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-nvghv,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 011cd85f-b07b-46e4-b4bd-1f47b7dc24df,},Annotations:map[string]string{io.kubernetes.container.hash: 8bf92d6c,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61c8df7c06a7c24a7a18e22ca66754de29c01c30cfdcb57f30806e3d575cc724,PodSandboxId:6e4334647505cf853335a0f6cd454ceb570e606fd9cb00f65d9a2fb86165f4f9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722274554709310587,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubern
etes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9b8d7938-0a77-4cff-9e81-6455967a4c76,},Annotations:map[string]string{io.kubernetes.container.hash: 9a603d37,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1473edccf5c58cf2c9f0a63af99d7ed0d95c2ec5507062cdd22dab7e3e693d54,PodSandboxId:581e355dec7028fdbdb7314e552979106a837b66baffba26a126481edc270958,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1722274517031656954,Labels:map[string]string{io.kubernetes.container.name: local-
path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-wkvlg,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 2081ac21-9f79-448b-ae42-1077fae38ef9,},Annotations:map[string]string{io.kubernetes.container.hash: f9ce71f5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f5c7001264b698e30818eff99cbeb83a51439e58115fd60a99f663876ceb6e7,PodSandboxId:887eb3b237ee12aa9cccc13ebbe16d1acc46aa8eab81d3d55fd3405f4b17d3b6,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722274500463481
915,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-twcpr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 729a1011-260e-49bc-9fe9-0f5a13a4f5d7,},Annotations:map[string]string{io.kubernetes.container.hash: 8a02c318,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2638d0f3fe4e51cdc6dae5202502b3489349eb03a0433187d89afa4f04258bb0,PodSandboxId:ae659103a1f4af5c82dd9058adbadc9d1989d7dc646f730d1c896b394920d657,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f56173
42c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722274474327653971,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cd58c8b-201b-433a-917f-1382e5a8fa0a,},Annotations:map[string]string{io.kubernetes.container.hash: 13db8ad4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28f0536849e4d33ef5404799f1da0c33720395f13b1a5e639f3cdc93f8bff5e8,PodSandboxId:2989fd014413395f0ff80529c0cefc21fe42f921b6c62504d05c51a46df43c66,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00
797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722274471486750466,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-dfrfm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f7f3dfc-f445-447d-8b0f-f9768984eff7,},Annotations:map[string]string{io.kubernetes.container.hash: 5355ebe7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db9a7cd1c02e6cfdb6d03cbb5630580a6a0eafef14ebdbae1a480ee236727e02,PodSandboxId:5d4162abe009f4c97046ae8539bd5ef003e3343df2f7d2941a1cc5263d691835,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec
{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722274468856613349,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v6sd2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a80c5a1-59ca-4e68-b237-5e7e03f8c23e,},Annotations:map[string]string{io.kubernetes.container.hash: d5cf5e75,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:954122fb41ccc0077861429e96b58887533974d1bc428b25958d6a47ceda9b78,PodSandboxId:f823acb8f6efe7e1f3e963afed288e21bc166e055f7c714360ecae2f989e96eb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7e
b138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722274449044766573,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-145541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83fe687db118bee4366eb90c9db19856,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:905e9468d35c61aa82eeec9e1d23a5f0b7a51d206140d497222588fd6846d273,PodSandboxId:b5704093a7247f101b286297047314216bef4d86693be84020998c11672e1615,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b767
22eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722274449015587187,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-145541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dce093dfe1a1321ec268f1e97babb4b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a81e90aed1439aa9dbccf7dec2273f8264cb1879c7b67c6ac17688632472920,PodSandboxId:eb740979212835e357a0d7a2b7422172cff1a1c49d65450b42492fe408cacc5f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527e
f0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722274449025150954,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-145541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61d5145776697d68caff9315eb068a51,},Annotations:map[string]string{io.kubernetes.container.hash: f61661de,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3b4e3a799006df5d6b91d4a312bbf6b40762d89407b95a4d33f84e0e3504b98,PodSandboxId:bcb11c13549f7843a069467b69c7682192c250a03b5c3357cc5f80040a177f80,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722274449021513077,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-145541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 632974a84e8c8f2d87e51d80d84b674c,},Annotations:map[string]string{io.kubernetes.container.hash: 2955e17,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6a0ebc6f-2b22-48cf-a031-f0645787bf85 name=/runtime.v1.RuntimeService/ListContainers
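
The debug entries above show CRI-O serving the CRI gRPC surface (/runtime.v1.RuntimeService/Version, ImageFsInfo, ListPodSandbox and ListContainers) to its pollers. As a minimal sketch only, not part of the test harness, the same Version and unfiltered ListContainers calls can be issued directly against the runtime socket with the Kubernetes cri-api client in Go; the socket path below is the stock CRI-O default and is an assumption here:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed default CRI-O socket path; adjust if the runtime endpoint differs.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Mirrors the /runtime.v1.RuntimeService/Version request seen in the log.
	ver, err := client.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("runtime: %s %s\n", ver.RuntimeName, ver.RuntimeVersion)

	// Mirrors the unfiltered /runtime.v1.RuntimeService/ListContainers request.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s  %-25s  %s\n", c.Id[:13], c.Metadata.Name, c.State)
	}
}

In practice, crictl ps and crictl pods against the same endpoint return the same data; the sketch is only meant to relate the protobuf dumps above to the API calls being logged.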
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	99826bc575a85       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   b6f81bcd48db6       hello-world-app-6778b5fc9f-t9gs9
	97c8d95747fbf       docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                         5 minutes ago       Running             nginx                     0                   9ea04a532b4f2       nginx
	8deb3bdf34f37       ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37                   5 minutes ago       Running             headlamp                  0                   8a766dd437481       headlamp-7867546754-nvghv
	61c8df7c06a7c       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     5 minutes ago       Running             busybox                   0                   6e4334647505c       busybox
	1473edccf5c58       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef        6 minutes ago       Running             local-path-provisioner    0                   581e355dec702       local-path-provisioner-8d985888d-wkvlg
	7f5c7001264b6       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   6 minutes ago       Running             metrics-server            0                   887eb3b237ee1       metrics-server-c59844bb4-twcpr
	2638d0f3fe4e5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        7 minutes ago       Running             storage-provisioner       0                   ae659103a1f4a       storage-provisioner
	28f0536849e4d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        7 minutes ago       Running             coredns                   0                   2989fd0144133       coredns-7db6d8ff4d-dfrfm
	db9a7cd1c02e6       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                                        7 minutes ago       Running             kube-proxy                0                   5d4162abe009f       kube-proxy-v6sd2
	954122fb41ccc       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                                        7 minutes ago       Running             kube-controller-manager   0                   f823acb8f6efe       kube-controller-manager-addons-145541
	9a81e90aed143       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                        7 minutes ago       Running             etcd                      0                   eb74097921283       etcd-addons-145541
	b3b4e3a799006       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                                        7 minutes ago       Running             kube-apiserver            0                   bcb11c13549f7       kube-apiserver-addons-145541
	905e9468d35c6       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                                        7 minutes ago       Running             kube-scheduler            0                   b5704093a7247       kube-scheduler-addons-145541
	
	
	==> coredns [28f0536849e4d33ef5404799f1da0c33720395f13b1a5e639f3cdc93f8bff5e8] <==
	[INFO] 10.244.0.7:45762 - 47565 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000149574s
	[INFO] 10.244.0.7:59178 - 42865 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000120837s
	[INFO] 10.244.0.7:59178 - 9331 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000094091s
	[INFO] 10.244.0.7:47171 - 15728 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000079648s
	[INFO] 10.244.0.7:47171 - 52595 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000070794s
	[INFO] 10.244.0.7:39981 - 20795 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000085211s
	[INFO] 10.244.0.7:39981 - 58937 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00009273s
	[INFO] 10.244.0.7:59309 - 36416 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000100635s
	[INFO] 10.244.0.7:59309 - 47685 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000198172s
	[INFO] 10.244.0.7:57263 - 403 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000066491s
	[INFO] 10.244.0.7:57263 - 2193 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000049571s
	[INFO] 10.244.0.7:45217 - 4903 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000042541s
	[INFO] 10.244.0.7:45217 - 20265 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00003672s
	[INFO] 10.244.0.7:44300 - 40164 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000044674s
	[INFO] 10.244.0.7:44300 - 51941 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000051681s
	[INFO] 10.244.0.22:33933 - 860 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000406818s
	[INFO] 10.244.0.22:32793 - 10821 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000142403s
	[INFO] 10.244.0.22:49708 - 53449 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00007366s
	[INFO] 10.244.0.22:45962 - 57244 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000065737s
	[INFO] 10.244.0.22:54506 - 20102 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000134306s
	[INFO] 10.244.0.22:35808 - 40225 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000503186s
	[INFO] 10.244.0.22:35747 - 43108 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.00113837s
	[INFO] 10.244.0.22:60879 - 34246 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00141598s
	[INFO] 10.244.0.27:44390 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000355043s
	[INFO] 10.244.0.27:36489 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000091784s
	
	
	==> describe nodes <==
	Name:               addons-145541
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-145541
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=de53afae5e8f4269438099f4dad14d93a8a17e35
	                    minikube.k8s.io/name=addons-145541
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T17_34_14_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-145541
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 17:34:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-145541
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 17:41:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 17:39:19 +0000   Mon, 29 Jul 2024 17:34:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 17:39:19 +0000   Mon, 29 Jul 2024 17:34:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 17:39:19 +0000   Mon, 29 Jul 2024 17:34:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 17:39:19 +0000   Mon, 29 Jul 2024 17:34:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.242
	  Hostname:    addons-145541
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 ed693bef4bfa479e8fe75e2f6aa79535
	  System UUID:                ed693bef-4bfa-479e-8fe7-5e2f6aa79535
	  Boot ID:                    eee4bf92-69a5-4e92-84eb-0f893b86c8cd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                      ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  default                     hello-world-app-6778b5fc9f-t9gs9          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m51s
	  default                     nginx                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m12s
	  headlamp                    headlamp-7867546754-nvghv                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m26s
	  kube-system                 coredns-7db6d8ff4d-dfrfm                  100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m22s
	  kube-system                 etcd-addons-145541                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         7m36s
	  kube-system                 kube-apiserver-addons-145541              250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m37s
	  kube-system                 kube-controller-manager-addons-145541     200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m36s
	  kube-system                 kube-proxy-v6sd2                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m23s
	  kube-system                 kube-scheduler-addons-145541              100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m38s
	  kube-system                 metrics-server-c59844bb4-twcpr            100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         7m17s
	  kube-system                 storage-provisioner                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m18s
	  local-path-storage          local-path-provisioner-8d985888d-wkvlg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m20s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  7m42s (x8 over 7m42s)  kubelet          Node addons-145541 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m42s (x8 over 7m42s)  kubelet          Node addons-145541 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m42s (x7 over 7m42s)  kubelet          Node addons-145541 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 7m36s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m36s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m36s                  kubelet          Node addons-145541 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m36s                  kubelet          Node addons-145541 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m36s                  kubelet          Node addons-145541 status is now: NodeHasSufficientPID
	  Normal  NodeReady                7m35s                  kubelet          Node addons-145541 status is now: NodeReady
	  Normal  RegisteredNode           7m24s                  node-controller  Node addons-145541 event: Registered Node addons-145541 in Controller
	
	
	==> dmesg <==
	[  +0.149328] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.092602] kauditd_printk_skb: 113 callbacks suppressed
	[  +5.073101] kauditd_printk_skb: 136 callbacks suppressed
	[  +8.223655] kauditd_printk_skb: 77 callbacks suppressed
	[ +11.941652] kauditd_printk_skb: 2 callbacks suppressed
	[Jul29 17:35] kauditd_printk_skb: 4 callbacks suppressed
	[  +8.115820] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.718757] kauditd_printk_skb: 35 callbacks suppressed
	[  +5.083402] kauditd_printk_skb: 35 callbacks suppressed
	[  +5.145739] kauditd_printk_skb: 69 callbacks suppressed
	[ +11.339123] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.001058] kauditd_printk_skb: 7 callbacks suppressed
	[  +7.863867] kauditd_printk_skb: 48 callbacks suppressed
	[Jul29 17:36] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.781492] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.054439] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.658235] kauditd_printk_skb: 61 callbacks suppressed
	[  +5.000536] kauditd_printk_skb: 61 callbacks suppressed
	[  +5.972500] kauditd_printk_skb: 12 callbacks suppressed
	[  +6.080847] kauditd_printk_skb: 29 callbacks suppressed
	[  +6.188249] kauditd_printk_skb: 30 callbacks suppressed
	[Jul29 17:37] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.295843] kauditd_printk_skb: 33 callbacks suppressed
	[Jul29 17:38] kauditd_printk_skb: 6 callbacks suppressed
	[Jul29 17:39] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [9a81e90aed1439aa9dbccf7dec2273f8264cb1879c7b67c6ac17688632472920] <==
	{"level":"warn","ts":"2024-07-29T17:35:44.614315Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"163.117649ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/\" range_end:\"/registry/events0\" count_only:true ","response":"range_response_count:0 size:8"}
	{"level":"warn","ts":"2024-07-29T17:35:44.614329Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"164.10821ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-29T17:35:44.614336Z","caller":"traceutil/trace.go:171","msg":"trace[379648445] range","detail":"{range_begin:/registry/events/; range_end:/registry/events0; response_count:0; response_revision:1156; }","duration":"163.1565ms","start":"2024-07-29T17:35:44.451173Z","end":"2024-07-29T17:35:44.61433Z","steps":["trace[379648445] 'agreement among raft nodes before linearized reading'  (duration: 163.062149ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T17:35:44.614343Z","caller":"traceutil/trace.go:171","msg":"trace[795194796] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1156; }","duration":"164.151582ms","start":"2024-07-29T17:35:44.450187Z","end":"2024-07-29T17:35:44.614339Z","steps":["trace[795194796] 'agreement among raft nodes before linearized reading'  (duration: 164.12639ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T17:35:44.614449Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"363.545007ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"warn","ts":"2024-07-29T17:35:44.614462Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"244.097407ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:498"}
	{"level":"info","ts":"2024-07-29T17:35:44.614464Z","caller":"traceutil/trace.go:171","msg":"trace[1827532928] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1156; }","duration":"363.580715ms","start":"2024-07-29T17:35:44.250878Z","end":"2024-07-29T17:35:44.614459Z","steps":["trace[1827532928] 'agreement among raft nodes before linearized reading'  (duration: 363.522703ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T17:35:44.614477Z","caller":"traceutil/trace.go:171","msg":"trace[84502644] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1156; }","duration":"244.127169ms","start":"2024-07-29T17:35:44.370344Z","end":"2024-07-29T17:35:44.614471Z","steps":["trace[84502644] 'agreement among raft nodes before linearized reading'  (duration: 244.075364ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T17:35:44.614478Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T17:35:44.250866Z","time spent":"363.609178ms","remote":"127.0.0.1:45306","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1136,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"info","ts":"2024-07-29T17:35:46.883415Z","caller":"traceutil/trace.go:171","msg":"trace[654725735] transaction","detail":"{read_only:false; response_revision:1164; number_of_response:1; }","duration":"188.696799ms","start":"2024-07-29T17:35:46.694648Z","end":"2024-07-29T17:35:46.883344Z","steps":["trace[654725735] 'process raft request'  (duration: 188.465205ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T17:36:23.174721Z","caller":"traceutil/trace.go:171","msg":"trace[167230728] transaction","detail":"{read_only:false; response_revision:1441; number_of_response:1; }","duration":"102.262614ms","start":"2024-07-29T17:36:23.072417Z","end":"2024-07-29T17:36:23.17468Z","steps":["trace[167230728] 'process raft request'  (duration: 101.912495ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T17:36:29.406158Z","caller":"traceutil/trace.go:171","msg":"trace[1434087103] linearizableReadLoop","detail":"{readStateIndex:1572; appliedIndex:1571; }","duration":"193.483273ms","start":"2024-07-29T17:36:29.212639Z","end":"2024-07-29T17:36:29.406122Z","steps":["trace[1434087103] 'read index received'  (duration: 193.333784ms)","trace[1434087103] 'applied index is now lower than readState.Index'  (duration: 148.979µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-29T17:36:29.406326Z","caller":"traceutil/trace.go:171","msg":"trace[820859752] transaction","detail":"{read_only:false; response_revision:1517; number_of_response:1; }","duration":"204.951568ms","start":"2024-07-29T17:36:29.201354Z","end":"2024-07-29T17:36:29.406306Z","steps":["trace[820859752] 'process raft request'  (duration: 204.658729ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T17:36:29.406383Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"193.659076ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshots/\" range_end:\"/registry/snapshot.storage.k8s.io/volumesnapshots0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-29T17:36:29.406418Z","caller":"traceutil/trace.go:171","msg":"trace[1250374592] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshots/; range_end:/registry/snapshot.storage.k8s.io/volumesnapshots0; response_count:0; response_revision:1517; }","duration":"193.772906ms","start":"2024-07-29T17:36:29.212635Z","end":"2024-07-29T17:36:29.406407Z","steps":["trace[1250374592] 'agreement among raft nodes before linearized reading'  (duration: 193.643581ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T17:36:30.001208Z","caller":"traceutil/trace.go:171","msg":"trace[1357907197] linearizableReadLoop","detail":"{readStateIndex:1573; appliedIndex:1572; }","duration":"213.716644ms","start":"2024-07-29T17:36:29.787424Z","end":"2024-07-29T17:36:30.001141Z","steps":["trace[1357907197] 'read index received'  (duration: 213.303972ms)","trace[1357907197] 'applied index is now lower than readState.Index'  (duration: 412.126µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T17:36:30.002192Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"214.751593ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:554"}
	{"level":"info","ts":"2024-07-29T17:36:30.002275Z","caller":"traceutil/trace.go:171","msg":"trace[1059047934] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1518; }","duration":"214.843427ms","start":"2024-07-29T17:36:29.787405Z","end":"2024-07-29T17:36:30.002249Z","steps":["trace[1059047934] 'agreement among raft nodes before linearized reading'  (duration: 214.716163ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T17:36:30.002537Z","caller":"traceutil/trace.go:171","msg":"trace[721338687] transaction","detail":"{read_only:false; response_revision:1518; number_of_response:1; }","duration":"217.160293ms","start":"2024-07-29T17:36:29.785366Z","end":"2024-07-29T17:36:30.002527Z","steps":["trace[721338687] 'process raft request'  (duration: 215.554662ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T17:36:58.937403Z","caller":"traceutil/trace.go:171","msg":"trace[982322811] linearizableReadLoop","detail":"{readStateIndex:1777; appliedIndex:1776; }","duration":"328.886281ms","start":"2024-07-29T17:36:58.608497Z","end":"2024-07-29T17:36:58.937383Z","steps":["trace[982322811] 'read index received'  (duration: 327.139757ms)","trace[982322811] 'applied index is now lower than readState.Index'  (duration: 1.745793ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T17:36:58.937597Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"329.037039ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/task-pv-pod-restore\" ","response":"range_response_count:1 size:2854"}
	{"level":"info","ts":"2024-07-29T17:36:58.93768Z","caller":"traceutil/trace.go:171","msg":"trace[223559889] range","detail":"{range_begin:/registry/pods/default/task-pv-pod-restore; range_end:; response_count:1; response_revision:1713; }","duration":"329.199521ms","start":"2024-07-29T17:36:58.608471Z","end":"2024-07-29T17:36:58.937671Z","steps":["trace[223559889] 'agreement among raft nodes before linearized reading'  (duration: 328.973584ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T17:36:58.937717Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T17:36:58.608459Z","time spent":"329.243464ms","remote":"127.0.0.1:45310","response type":"/etcdserverpb.KV/Range","request count":0,"request size":44,"response count":1,"response size":2877,"request content":"key:\"/registry/pods/default/task-pv-pod-restore\" "}
	{"level":"warn","ts":"2024-07-29T17:36:58.93773Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"286.952813ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/hpvc-restore\" ","response":"range_response_count:1 size:1594"}
	{"level":"info","ts":"2024-07-29T17:36:58.937832Z","caller":"traceutil/trace.go:171","msg":"trace[811627678] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/hpvc-restore; range_end:; response_count:1; response_revision:1713; }","duration":"287.08254ms","start":"2024-07-29T17:36:58.650741Z","end":"2024-07-29T17:36:58.937823Z","steps":["trace[811627678] 'agreement among raft nodes before linearized reading'  (duration: 286.849665ms)"],"step_count":1}
	
	
	==> kernel <==
	 17:41:50 up 8 min,  0 users,  load average: 0.16, 0.80, 0.58
	Linux addons-145541 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [b3b4e3a799006df5d6b91d4a312bbf6b40762d89407b95a4d33f84e0e3504b98] <==
	I0729 17:36:04.882970       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0729 17:36:05.195504       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0729 17:36:24.867102       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.100.83.134"}
	I0729 17:36:38.371654       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0729 17:36:38.556320       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.103.195.224"}
	I0729 17:36:42.918523       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0729 17:36:43.936039       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0729 17:36:47.323328       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0729 17:37:12.198760       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 17:37:12.199051       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0729 17:37:12.221740       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 17:37:12.221798       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0729 17:37:12.233376       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 17:37:12.233428       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0729 17:37:12.264479       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 17:37:12.264622       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0729 17:37:12.291886       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 17:37:12.292025       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0729 17:37:13.222543       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0729 17:37:13.292804       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0729 17:37:13.324328       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0729 17:38:59.383541       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.100.99.193"}
	E0729 17:39:02.048130       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	E0729 17:39:04.715104       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	E0729 17:39:04.720649       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [954122fb41ccc0077861429e96b58887533974d1bc428b25958d6a47ceda9b78] <==
	W0729 17:39:53.435793       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 17:39:53.435868       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 17:39:55.182067       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 17:39:55.182100       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 17:39:56.669634       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 17:39:56.669720       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 17:40:11.050921       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 17:40:11.051046       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 17:40:34.502543       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 17:40:34.502602       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 17:40:35.650040       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 17:40:35.650091       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 17:40:37.867249       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 17:40:37.867304       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 17:40:55.892854       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 17:40:55.892924       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 17:41:13.273420       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 17:41:13.273573       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 17:41:28.446130       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 17:41:28.446228       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 17:41:33.027147       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 17:41:33.027195       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 17:41:40.373659       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 17:41:40.373712       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0729 17:41:49.321888       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="10.792µs"
	
	
	==> kube-proxy [db9a7cd1c02e6cfdb6d03cbb5630580a6a0eafef14ebdbae1a480ee236727e02] <==
	I0729 17:34:29.693388       1 server_linux.go:69] "Using iptables proxy"
	I0729 17:34:29.715076       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.242"]
	I0729 17:34:29.811845       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 17:34:29.811904       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 17:34:29.811925       1 server_linux.go:165] "Using iptables Proxier"
	I0729 17:34:29.821160       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 17:34:29.821437       1 server.go:872] "Version info" version="v1.30.3"
	I0729 17:34:29.821466       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 17:34:29.828715       1 config.go:192] "Starting service config controller"
	I0729 17:34:29.828742       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 17:34:29.828760       1 config.go:101] "Starting endpoint slice config controller"
	I0729 17:34:29.828763       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 17:34:29.829219       1 config.go:319] "Starting node config controller"
	I0729 17:34:29.829225       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 17:34:29.929075       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 17:34:29.929092       1 shared_informer.go:320] Caches are synced for service config
	I0729 17:34:29.929328       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [905e9468d35c61aa82eeec9e1d23a5f0b7a51d206140d497222588fd6846d273] <==
	W0729 17:34:11.692605       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 17:34:11.692613       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 17:34:12.543862       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 17:34:12.543913       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 17:34:12.592632       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 17:34:12.592684       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 17:34:12.646682       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 17:34:12.646722       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0729 17:34:12.661920       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 17:34:12.662015       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 17:34:12.693095       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 17:34:12.693141       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 17:34:12.710887       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 17:34:12.710973       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 17:34:12.834281       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 17:34:12.834333       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 17:34:12.848157       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 17:34:12.848206       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 17:34:12.966112       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 17:34:12.966159       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 17:34:12.987316       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 17:34:12.987400       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 17:34:12.987462       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 17:34:12.987492       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0729 17:34:15.877650       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 17:39:14 addons-145541 kubelet[1278]: E0729 17:39:14.276353    1278 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 17:39:14 addons-145541 kubelet[1278]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 17:39:14 addons-145541 kubelet[1278]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 17:39:14 addons-145541 kubelet[1278]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 17:39:14 addons-145541 kubelet[1278]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 17:39:16 addons-145541 kubelet[1278]: I0729 17:39:16.054886    1278 scope.go:117] "RemoveContainer" containerID="4fc9a50e0888f75951fe08761bcc8d8754f969c8eb40fc43d90c627bc6c039df"
	Jul 29 17:39:16 addons-145541 kubelet[1278]: I0729 17:39:16.077880    1278 scope.go:117] "RemoveContainer" containerID="b8ea63b719dad3495b7e61d9549f5f7135fe62ea4e87be089675c856bac8a3bc"
	Jul 29 17:39:53 addons-145541 kubelet[1278]: I0729 17:39:53.251052    1278 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Jul 29 17:40:14 addons-145541 kubelet[1278]: E0729 17:40:14.276611    1278 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 17:40:14 addons-145541 kubelet[1278]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 17:40:14 addons-145541 kubelet[1278]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 17:40:14 addons-145541 kubelet[1278]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 17:40:14 addons-145541 kubelet[1278]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 17:41:14 addons-145541 kubelet[1278]: E0729 17:41:14.277713    1278 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 17:41:14 addons-145541 kubelet[1278]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 17:41:14 addons-145541 kubelet[1278]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 17:41:14 addons-145541 kubelet[1278]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 17:41:14 addons-145541 kubelet[1278]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 17:41:20 addons-145541 kubelet[1278]: I0729 17:41:20.251617    1278 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Jul 29 17:41:50 addons-145541 kubelet[1278]: I0729 17:41:50.722029    1278 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/729a1011-260e-49bc-9fe9-0f5a13a4f5d7-tmp-dir\") pod \"729a1011-260e-49bc-9fe9-0f5a13a4f5d7\" (UID: \"729a1011-260e-49bc-9fe9-0f5a13a4f5d7\") "
	Jul 29 17:41:50 addons-145541 kubelet[1278]: I0729 17:41:50.722069    1278 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fj9tf\" (UniqueName: \"kubernetes.io/projected/729a1011-260e-49bc-9fe9-0f5a13a4f5d7-kube-api-access-fj9tf\") pod \"729a1011-260e-49bc-9fe9-0f5a13a4f5d7\" (UID: \"729a1011-260e-49bc-9fe9-0f5a13a4f5d7\") "
	Jul 29 17:41:50 addons-145541 kubelet[1278]: I0729 17:41:50.722535    1278 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/729a1011-260e-49bc-9fe9-0f5a13a4f5d7-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "729a1011-260e-49bc-9fe9-0f5a13a4f5d7" (UID: "729a1011-260e-49bc-9fe9-0f5a13a4f5d7"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Jul 29 17:41:50 addons-145541 kubelet[1278]: I0729 17:41:50.735047    1278 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/729a1011-260e-49bc-9fe9-0f5a13a4f5d7-kube-api-access-fj9tf" (OuterVolumeSpecName: "kube-api-access-fj9tf") pod "729a1011-260e-49bc-9fe9-0f5a13a4f5d7" (UID: "729a1011-260e-49bc-9fe9-0f5a13a4f5d7"). InnerVolumeSpecName "kube-api-access-fj9tf". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 29 17:41:50 addons-145541 kubelet[1278]: I0729 17:41:50.822583    1278 reconciler_common.go:289] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/729a1011-260e-49bc-9fe9-0f5a13a4f5d7-tmp-dir\") on node \"addons-145541\" DevicePath \"\""
	Jul 29 17:41:50 addons-145541 kubelet[1278]: I0729 17:41:50.822635    1278 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-fj9tf\" (UniqueName: \"kubernetes.io/projected/729a1011-260e-49bc-9fe9-0f5a13a4f5d7-kube-api-access-fj9tf\") on node \"addons-145541\" DevicePath \"\""
	
	
	==> storage-provisioner [2638d0f3fe4e51cdc6dae5202502b3489349eb03a0433187d89afa4f04258bb0] <==
	I0729 17:34:34.671086       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 17:34:34.699061       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 17:34:34.699134       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 17:34:34.711840       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 17:34:34.712008       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-145541_32f2532d-6af4-413f-ba99-cadabb66aee9!
	I0729 17:34:34.712512       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b9323c94-5488-4f8f-b4e8-f1ec712e35c7", APIVersion:"v1", ResourceVersion:"653", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-145541_32f2532d-6af4-413f-ba99-cadabb66aee9 became leader
	I0729 17:34:34.812533       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-145541_32f2532d-6af4-413f-ba99-cadabb66aee9!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-145541 -n addons-145541
helpers_test.go:261: (dbg) Run:  kubectl --context addons-145541 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (326.95s)
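The post-mortem above shows metrics-server-c59844bb4-twcpr Running on the node, so the failure is more likely in the metrics pipeline than in pod scheduling. A minimal triage sketch against the same profile, assuming the addons-145541 kubeconfig context from this run is still reachable and that the addon uses the standard k8s-app=metrics-server label:

  # Check the metrics-server pod and its recent logs (deployment name taken from the pod list above)
  kubectl --context addons-145541 -n kube-system get pods -l k8s-app=metrics-server -o wide
  kubectl --context addons-145541 -n kube-system logs deploy/metrics-server --tail=100
  # The metrics API is served through an APIService; its Available condition usually explains a timeout here
  kubectl --context addons-145541 get apiservice v1beta1.metrics.k8s.io -o yaml
  kubectl --context addons-145541 top nodes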

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (154.22s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-145541
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-145541: exit status 82 (2m0.447116951s)

                                                
                                                
-- stdout --
	* Stopping node "addons-145541"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:176: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-145541" : exit status 82
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-145541
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-145541: exit status 11 (21.484224086s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.242:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:180: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-145541" : exit status 11
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-145541
addons_test.go:182: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-145541: exit status 11 (6.139801486s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.242:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:184: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-145541" : exit status 11
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-145541
addons_test.go:187: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-145541: exit status 11 (6.14374293s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.242:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:189: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-145541" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.22s)
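All three addon commands fail for the same underlying reason: the earlier stop timed out (GUEST_STOP_TIMEOUT), so minikube still treats the profile as running while SSH to 192.168.39.242 has no route to host. A sketch for checking the host-side view with the kvm2 driver, assuming the libvirt domain carries the profile name as usual:

  # Compare minikube's view of the profile with libvirt's view of the VM
  out/minikube-linux-amd64 status -p addons-145541
  sudo virsh list --all | grep addons-145541
  # Retry the stop with verbose logging to see where the guest shutdown hangs
  out/minikube-linux-amd64 stop -p addons-145541 --alsologtostderr -v=3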

                                                
                                    
x
+
TestCertExpiration (1103.22s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-974855 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
E0729 18:43:01.951239   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/functional-810151/client.crt: no such file or directory
E0729 18:43:18.902787   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/functional-810151/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-974855 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m4.317411027s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-974855 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p cert-expiration-974855 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: exit status 109 (14m16.35323101s)

                                                
                                                
-- stdout --
	* [cert-expiration-974855] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19339
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19339-88081/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19339-88081/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "cert-expiration-974855" primary control-plane node in "cert-expiration-974855" cluster
	* Updating the running kvm2 "cert-expiration-974855" VM ...
	* Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Certificate client.crt has expired. Generating a new one...
	! Certificate apiserver.crt.2f8e6814 has expired. Generating a new one...
	! Certificate proxy-client.crt has expired. Generating a new one...
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.30.3
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.303006ms
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.000198387s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.30.3
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.020303ms
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.000040263s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.30.3
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.020303ms
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.000040263s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

                                                
                                                
** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-linux-amd64 start -p cert-expiration-974855 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio" : exit status 109
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-07-29 19:01:13.798334567 +0000 UTC m=+5283.769038349
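For context, cert_options_test.go drives this scenario in two phases: a first start with --cert-expiration=3m (which completed above in about 1m04s), a wait for those certificates to lapse, then a second start with --cert-expiration=8760h. In this run the second start regenerated the expired client/apiserver/proxy-client certificates, but kubeadm init then timed out waiting for a healthy API server, so minikube exited 109. A rough out-of-harness reproduction sketch, adapted from the commands in the log; the plain minikube binary and the sleep duration are assumptions, and the final retry only applies if the kubelet cgroup-driver suggestion from the stderr above is relevant:

	# rough manual reproduction of the TestCertExpiration flow (adapted from the commands above)
	minikube start -p cert-expiration-974855 --memory=2048 --cert-expiration=3m \
	  --driver=kvm2 --container-runtime=crio
	sleep 200      # assumed wait so the 3m certificates expire; the harness times this itself
	minikube start -p cert-expiration-974855 --memory=2048 --cert-expiration=8760h \
	  --driver=kvm2 --container-runtime=crio          # the step that exited 109 in this run
	# optional retry per the stderr suggestion above (may not address the API-server timeout):
	# minikube start -p cert-expiration-974855 --extra-config=kubelet.cgroup-driver=systemd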
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p cert-expiration-974855 -n cert-expiration-974855
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p cert-expiration-974855 -n cert-expiration-974855: exit status 2 (245.710555ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestCertExpiration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestCertExpiration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p cert-expiration-974855 logs -n 25
helpers_test.go:252: TestCertExpiration logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p calico-085245 sudo cat                              | calico-085245                | jenkins | v1.33.1 | 29 Jul 24 18:50 UTC | 29 Jul 24 18:50 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service             |                              |         |         |                     |                     |
	| ssh     | -p calico-085245 sudo                                  | calico-085245                | jenkins | v1.33.1 | 29 Jul 24 18:50 UTC | 29 Jul 24 18:50 UTC |
	|         | cri-dockerd --version                                  |                              |         |         |                     |                     |
	| ssh     | -p calico-085245 sudo                                  | calico-085245                | jenkins | v1.33.1 | 29 Jul 24 18:50 UTC |                     |
	|         | systemctl status containerd                            |                              |         |         |                     |                     |
	|         | --all --full --no-pager                                |                              |         |         |                     |                     |
	| ssh     | -p calico-085245 sudo                                  | calico-085245                | jenkins | v1.33.1 | 29 Jul 24 18:50 UTC | 29 Jul 24 18:50 UTC |
	|         | systemctl cat containerd                               |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p calico-085245 sudo cat                              | calico-085245                | jenkins | v1.33.1 | 29 Jul 24 18:50 UTC | 29 Jul 24 18:50 UTC |
	|         | /lib/systemd/system/containerd.service                 |                              |         |         |                     |                     |
	| ssh     | -p calico-085245 sudo cat                              | calico-085245                | jenkins | v1.33.1 | 29 Jul 24 18:50 UTC | 29 Jul 24 18:50 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p calico-085245 sudo                                  | calico-085245                | jenkins | v1.33.1 | 29 Jul 24 18:50 UTC | 29 Jul 24 18:50 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p calico-085245 sudo                                  | calico-085245                | jenkins | v1.33.1 | 29 Jul 24 18:50 UTC | 29 Jul 24 18:50 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p calico-085245 sudo                                  | calico-085245                | jenkins | v1.33.1 | 29 Jul 24 18:50 UTC | 29 Jul 24 18:50 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p calico-085245 sudo find                             | calico-085245                | jenkins | v1.33.1 | 29 Jul 24 18:50 UTC | 29 Jul 24 18:50 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p calico-085245 sudo crio                             | calico-085245                | jenkins | v1.33.1 | 29 Jul 24 18:50 UTC | 29 Jul 24 18:50 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p calico-085245                                       | calico-085245                | jenkins | v1.33.1 | 29 Jul 24 18:50 UTC | 29 Jul 24 18:50 UTC |
	| start   | -p                                                     | default-k8s-diff-port-612270 | jenkins | v1.33.1 | 29 Jul 24 18:50 UTC | 29 Jul 24 18:52 UTC |
	|         | default-k8s-diff-port-612270                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-524369             | no-preload-524369            | jenkins | v1.33.1 | 29 Jul 24 18:51 UTC | 29 Jul 24 18:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-524369                                   | no-preload-524369            | jenkins | v1.33.1 | 29 Jul 24 18:51 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-612270  | default-k8s-diff-port-612270 | jenkins | v1.33.1 | 29 Jul 24 18:52 UTC | 29 Jul 24 18:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-612270 | jenkins | v1.33.1 | 29 Jul 24 18:52 UTC |                     |
	|         | default-k8s-diff-port-612270                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-834964        | old-k8s-version-834964       | jenkins | v1.33.1 | 29 Jul 24 18:53 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-524369                  | no-preload-524369            | jenkins | v1.33.1 | 29 Jul 24 18:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-524369 --memory=2200                     | no-preload-524369            | jenkins | v1.33.1 | 29 Jul 24 18:54 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-612270       | default-k8s-diff-port-612270 | jenkins | v1.33.1 | 29 Jul 24 18:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-612270 | jenkins | v1.33.1 | 29 Jul 24 18:55 UTC |                     |
	|         | default-k8s-diff-port-612270                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-834964                              | old-k8s-version-834964       | jenkins | v1.33.1 | 29 Jul 24 18:55 UTC | 29 Jul 24 18:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-834964             | old-k8s-version-834964       | jenkins | v1.33.1 | 29 Jul 24 18:55 UTC | 29 Jul 24 18:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-834964                              | old-k8s-version-834964       | jenkins | v1.33.1 | 29 Jul 24 18:55 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 18:55:39
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 18:55:39.585743  152077 out.go:291] Setting OutFile to fd 1 ...
	I0729 18:55:39.585990  152077 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:55:39.586005  152077 out.go:304] Setting ErrFile to fd 2...
	I0729 18:55:39.586013  152077 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:55:39.586221  152077 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19339-88081/.minikube/bin
	I0729 18:55:39.586753  152077 out.go:298] Setting JSON to false
	I0729 18:55:39.587710  152077 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":13060,"bootTime":1722266280,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 18:55:39.587771  152077 start.go:139] virtualization: kvm guest
	I0729 18:55:39.589466  152077 out.go:177] * [old-k8s-version-834964] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 18:55:39.590918  152077 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 18:55:39.590970  152077 notify.go:220] Checking for updates...
	I0729 18:55:39.593175  152077 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 18:55:39.594395  152077 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19339-88081/kubeconfig
	I0729 18:55:39.595489  152077 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19339-88081/.minikube
	I0729 18:55:39.596514  152077 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 18:55:39.597494  152077 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 18:55:39.598986  152077 config.go:182] Loaded profile config "old-k8s-version-834964": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 18:55:39.599586  152077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:55:39.599662  152077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:55:39.614383  152077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46441
	I0729 18:55:39.614780  152077 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:55:39.615251  152077 main.go:141] libmachine: Using API Version  1
	I0729 18:55:39.615272  152077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:55:39.615579  152077 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:55:39.615785  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .DriverName
	I0729 18:55:39.617440  152077 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 18:55:39.618461  152077 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 18:55:39.618765  152077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:55:39.618806  152077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:55:39.632923  152077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45279
	I0729 18:55:39.633257  152077 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:55:39.633631  152077 main.go:141] libmachine: Using API Version  1
	I0729 18:55:39.633650  152077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:55:39.633958  152077 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:55:39.634132  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .DriverName
	I0729 18:55:39.667892  152077 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 18:55:39.669026  152077 start.go:297] selected driver: kvm2
	I0729 18:55:39.669040  152077 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-834964 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-834964 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.89 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:55:39.669173  152077 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 18:55:39.669961  152077 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 18:55:39.670042  152077 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19339-88081/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 18:55:39.684510  152077 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 18:55:39.684981  152077 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 18:55:39.685056  152077 cni.go:84] Creating CNI manager for ""
	I0729 18:55:39.685074  152077 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:55:39.685129  152077 start.go:340] cluster config:
	{Name:old-k8s-version-834964 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-834964 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.89 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:55:39.685275  152077 iso.go:125] acquiring lock: {Name:mkff602c753bfa3e0d79d57ee8dc490f2e8d0298 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 18:55:39.687045  152077 out.go:177] * Starting "old-k8s-version-834964" primary control-plane node in "old-k8s-version-834964" cluster
	I0729 18:55:42.153123  151436 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.7:22: connect: no route to host
	I0729 18:55:39.688350  152077 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 18:55:39.688383  152077 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 18:55:39.688393  152077 cache.go:56] Caching tarball of preloaded images
	I0729 18:55:39.688471  152077 preload.go:172] Found /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 18:55:39.688484  152077 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0729 18:55:39.688615  152077 profile.go:143] Saving config to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/old-k8s-version-834964/config.json ...
	I0729 18:55:39.688812  152077 start.go:360] acquireMachinesLock for old-k8s-version-834964: {Name:mkb4fc91615189a18c2505c715f6575ee0da2912 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 18:55:45.225130  151436 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.7:22: connect: no route to host
	I0729 18:55:51.305151  151436 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.7:22: connect: no route to host
	I0729 18:55:54.377185  151436 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.7:22: connect: no route to host
	I0729 18:56:00.457087  151436 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.7:22: connect: no route to host
	I0729 18:56:03.529157  151436 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.7:22: connect: no route to host
	I0729 18:56:09.609101  151436 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.7:22: connect: no route to host
	I0729 18:56:12.681129  151436 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.7:22: connect: no route to host
	I0729 18:56:18.761119  151436 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.7:22: connect: no route to host
	I0729 18:56:21.833115  151436 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.7:22: connect: no route to host
	I0729 18:56:27.913170  151436 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.7:22: connect: no route to host
	I0729 18:56:30.985121  151436 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.7:22: connect: no route to host
	I0729 18:56:37.065126  151436 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.7:22: connect: no route to host
	I0729 18:56:40.137175  151436 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.7:22: connect: no route to host
	I0729 18:56:46.217242  151436 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.7:22: connect: no route to host
	I0729 18:56:49.289130  151436 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.7:22: connect: no route to host
	I0729 18:56:55.369136  151436 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.7:22: connect: no route to host
	I0729 18:56:58.441204  151436 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.7:22: connect: no route to host
	I0729 18:57:05.673789  139862 kubeadm.go:310] error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	I0729 18:57:05.673890  139862 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 18:57:05.675402  139862 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 18:57:05.675444  139862 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 18:57:05.675502  139862 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 18:57:05.675584  139862 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 18:57:05.675659  139862 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 18:57:05.675734  139862 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 18:57:05.677642  139862 out.go:204]   - Generating certificates and keys ...
	I0729 18:57:05.677720  139862 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 18:57:05.677770  139862 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 18:57:05.677832  139862 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 18:57:05.677910  139862 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 18:57:05.677980  139862 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 18:57:05.678022  139862 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 18:57:05.678098  139862 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 18:57:05.678164  139862 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 18:57:05.678223  139862 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 18:57:05.678286  139862 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 18:57:05.678315  139862 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 18:57:05.678365  139862 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 18:57:05.678404  139862 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 18:57:05.678464  139862 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 18:57:05.678506  139862 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 18:57:05.678555  139862 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 18:57:05.678599  139862 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 18:57:05.678662  139862 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 18:57:05.678716  139862 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 18:57:05.680218  139862 out.go:204]   - Booting up control plane ...
	I0729 18:57:05.680309  139862 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 18:57:05.680396  139862 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 18:57:05.680448  139862 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 18:57:05.680589  139862 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 18:57:05.680670  139862 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 18:57:05.680701  139862 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 18:57:05.680806  139862 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 18:57:05.680881  139862 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 18:57:05.680926  139862 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.303006ms
	I0729 18:57:05.680980  139862 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 18:57:05.681029  139862 kubeadm.go:310] [api-check] The API server is not healthy after 4m0.000198387s
	I0729 18:57:05.681031  139862 kubeadm.go:310] 
	I0729 18:57:05.681075  139862 kubeadm.go:310] Unfortunately, an error has occurred:
	I0729 18:57:05.681104  139862 kubeadm.go:310] 	context deadline exceeded
	I0729 18:57:05.681107  139862 kubeadm.go:310] 
	I0729 18:57:05.681150  139862 kubeadm.go:310] This error is likely caused by:
	I0729 18:57:05.681177  139862 kubeadm.go:310] 	- The kubelet is not running
	I0729 18:57:05.681293  139862 kubeadm.go:310] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 18:57:05.681303  139862 kubeadm.go:310] 
	I0729 18:57:05.681421  139862 kubeadm.go:310] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 18:57:05.681447  139862 kubeadm.go:310] 	- 'systemctl status kubelet'
	I0729 18:57:05.681471  139862 kubeadm.go:310] 	- 'journalctl -xeu kubelet'
	I0729 18:57:05.681474  139862 kubeadm.go:310] 
	I0729 18:57:05.681613  139862 kubeadm.go:310] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 18:57:05.681684  139862 kubeadm.go:310] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 18:57:05.681771  139862 kubeadm.go:310] Here is one example how you may list all running Kubernetes containers by using crictl:
	I0729 18:57:05.681902  139862 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 18:57:05.681983  139862 kubeadm.go:310] 	Once you have found the failing container, you can inspect its logs with:
	I0729 18:57:05.682077  139862 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	W0729 18:57:05.682211  139862 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.30.3
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.303006ms
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.000198387s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0729 18:57:05.682299  139862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 18:57:04.521162  151436 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.7:22: connect: no route to host
	I0729 18:57:07.593216  151436 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.7:22: connect: no route to host
	I0729 18:57:10.767665  139862 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.085343295s)
	I0729 18:57:10.767756  139862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:57:10.783540  139862 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:57:10.793939  139862 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:57:10.793951  139862 kubeadm.go:157] found existing configuration files:
	
	I0729 18:57:10.794002  139862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 18:57:10.803415  139862 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:57:10.803475  139862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:57:10.813512  139862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 18:57:10.823031  139862 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:57:10.823089  139862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:57:10.832817  139862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 18:57:10.842199  139862 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:57:10.842255  139862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:57:10.852326  139862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 18:57:10.861885  139862 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:57:10.861928  139862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 18:57:10.871848  139862 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 18:57:11.064626  139862 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 18:57:13.673174  151436 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.7:22: connect: no route to host
	I0729 18:57:16.745197  151436 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.7:22: connect: no route to host
	I0729 18:57:22.825145  151436 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.7:22: connect: no route to host
	I0729 18:57:25.897230  151436 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.7:22: connect: no route to host
	I0729 18:57:31.977153  151436 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.7:22: connect: no route to host
	I0729 18:57:35.049197  151436 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.7:22: connect: no route to host
	I0729 18:57:41.129140  151436 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.7:22: connect: no route to host
	I0729 18:57:44.201109  151436 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.7:22: connect: no route to host
	I0729 18:57:50.281160  151436 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.7:22: connect: no route to host
	I0729 18:57:53.353137  151436 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.7:22: connect: no route to host
	I0729 18:57:59.433122  151436 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.7:22: connect: no route to host
	I0729 18:58:02.505133  151436 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.7:22: connect: no route to host
	I0729 18:58:08.585115  151436 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.7:22: connect: no route to host
	I0729 18:58:11.657086  151436 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.7:22: connect: no route to host
	I0729 18:58:17.737179  151436 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.7:22: connect: no route to host
	I0729 18:58:20.809195  151436 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.7:22: connect: no route to host
	I0729 18:58:26.889156  151436 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.7:22: connect: no route to host
	I0729 18:58:29.961111  151436 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.7:22: connect: no route to host
	I0729 18:58:36.041132  151436 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.7:22: connect: no route to host
	I0729 18:58:39.113193  151436 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.7:22: connect: no route to host
	I0729 18:58:45.193148  151436 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.7:22: connect: no route to host
	I0729 18:58:48.265092  151436 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.7:22: connect: no route to host
	I0729 18:58:51.269298  151772 start.go:364] duration metric: took 3m49.578087746s to acquireMachinesLock for "default-k8s-diff-port-612270"
	I0729 18:58:51.269354  151772 start.go:96] Skipping create...Using existing machine configuration
	I0729 18:58:51.269364  151772 fix.go:54] fixHost starting: 
	I0729 18:58:51.269726  151772 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:58:51.269762  151772 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:58:51.285391  151772 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35337
	I0729 18:58:51.285896  151772 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:58:51.286355  151772 main.go:141] libmachine: Using API Version  1
	I0729 18:58:51.286377  151772 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:58:51.286738  151772 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:58:51.286970  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Calling .DriverName
	I0729 18:58:51.287128  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Calling .GetState
	I0729 18:58:51.288720  151772 fix.go:112] recreateIfNeeded on default-k8s-diff-port-612270: state=Stopped err=<nil>
	I0729 18:58:51.288742  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Calling .DriverName
	W0729 18:58:51.288888  151772 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 18:58:51.291164  151772 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-612270" ...
	I0729 18:58:51.292414  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Calling .Start
	I0729 18:58:51.292586  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Ensuring networks are active...
	I0729 18:58:51.293381  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Ensuring network default is active
	I0729 18:58:51.293779  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Ensuring network mk-default-k8s-diff-port-612270 is active
	I0729 18:58:51.294173  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Getting domain xml...
	I0729 18:58:51.294922  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Creating domain...
	I0729 18:58:51.266658  151436 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 18:58:51.266698  151436 main.go:141] libmachine: (no-preload-524369) Calling .GetMachineName
	I0729 18:58:51.267055  151436 buildroot.go:166] provisioning hostname "no-preload-524369"
	I0729 18:58:51.267083  151436 main.go:141] libmachine: (no-preload-524369) Calling .GetMachineName
	I0729 18:58:51.267295  151436 main.go:141] libmachine: (no-preload-524369) Calling .GetSSHHostname
	I0729 18:58:51.269160  151436 machine.go:97] duration metric: took 4m37.430763381s to provisionDockerMachine
	I0729 18:58:51.269200  151436 fix.go:56] duration metric: took 4m37.45266945s for fixHost
	I0729 18:58:51.269206  151436 start.go:83] releasing machines lock for "no-preload-524369", held for 4m37.45269794s
	W0729 18:58:51.269242  151436 start.go:714] error starting host: provision: host is not running
	W0729 18:58:51.269353  151436 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0729 18:58:51.269366  151436 start.go:729] Will try again in 5 seconds ...
	I0729 18:58:51.615205  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting to get IP...
	I0729 18:58:51.616118  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | domain default-k8s-diff-port-612270 has defined MAC address 52:54:00:e1:29:74 in network mk-default-k8s-diff-port-612270
	I0729 18:58:51.616574  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | unable to find current IP address of domain default-k8s-diff-port-612270 in network mk-default-k8s-diff-port-612270
	I0729 18:58:51.616641  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | I0729 18:58:51.616546  152778 retry.go:31] will retry after 237.739071ms: waiting for machine to come up
	I0729 18:58:51.855968  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | domain default-k8s-diff-port-612270 has defined MAC address 52:54:00:e1:29:74 in network mk-default-k8s-diff-port-612270
	I0729 18:58:51.856518  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | unable to find current IP address of domain default-k8s-diff-port-612270 in network mk-default-k8s-diff-port-612270
	I0729 18:58:51.856546  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | I0729 18:58:51.856481  152778 retry.go:31] will retry after 264.938113ms: waiting for machine to come up
	I0729 18:58:52.123043  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | domain default-k8s-diff-port-612270 has defined MAC address 52:54:00:e1:29:74 in network mk-default-k8s-diff-port-612270
	I0729 18:58:52.123680  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | unable to find current IP address of domain default-k8s-diff-port-612270 in network mk-default-k8s-diff-port-612270
	I0729 18:58:52.123709  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | I0729 18:58:52.123613  152778 retry.go:31] will retry after 320.594448ms: waiting for machine to come up
	I0729 18:58:52.446258  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | domain default-k8s-diff-port-612270 has defined MAC address 52:54:00:e1:29:74 in network mk-default-k8s-diff-port-612270
	I0729 18:58:52.446777  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | unable to find current IP address of domain default-k8s-diff-port-612270 in network mk-default-k8s-diff-port-612270
	I0729 18:58:52.446807  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | I0729 18:58:52.446737  152778 retry.go:31] will retry after 367.741592ms: waiting for machine to come up
	I0729 18:58:52.816195  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | domain default-k8s-diff-port-612270 has defined MAC address 52:54:00:e1:29:74 in network mk-default-k8s-diff-port-612270
	I0729 18:58:52.816695  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | unable to find current IP address of domain default-k8s-diff-port-612270 in network mk-default-k8s-diff-port-612270
	I0729 18:58:52.816744  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | I0729 18:58:52.816634  152778 retry.go:31] will retry after 755.4057ms: waiting for machine to come up
	I0729 18:58:53.573565  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | domain default-k8s-diff-port-612270 has defined MAC address 52:54:00:e1:29:74 in network mk-default-k8s-diff-port-612270
	I0729 18:58:53.574149  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | unable to find current IP address of domain default-k8s-diff-port-612270 in network mk-default-k8s-diff-port-612270
	I0729 18:58:53.574175  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | I0729 18:58:53.574072  152778 retry.go:31] will retry after 872.735406ms: waiting for machine to come up
	I0729 18:58:54.448072  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | domain default-k8s-diff-port-612270 has defined MAC address 52:54:00:e1:29:74 in network mk-default-k8s-diff-port-612270
	I0729 18:58:54.448570  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | unable to find current IP address of domain default-k8s-diff-port-612270 in network mk-default-k8s-diff-port-612270
	I0729 18:58:54.448626  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | I0729 18:58:54.448517  152778 retry.go:31] will retry after 923.965956ms: waiting for machine to come up
	I0729 18:58:55.373549  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | domain default-k8s-diff-port-612270 has defined MAC address 52:54:00:e1:29:74 in network mk-default-k8s-diff-port-612270
	I0729 18:58:55.374026  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | unable to find current IP address of domain default-k8s-diff-port-612270 in network mk-default-k8s-diff-port-612270
	I0729 18:58:55.374050  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | I0729 18:58:55.373979  152778 retry.go:31] will retry after 1.475361718s: waiting for machine to come up
	I0729 18:58:56.271044  151436 start.go:360] acquireMachinesLock for no-preload-524369: {Name:mkb4fc91615189a18c2505c715f6575ee0da2912 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 18:58:56.850949  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | domain default-k8s-diff-port-612270 has defined MAC address 52:54:00:e1:29:74 in network mk-default-k8s-diff-port-612270
	I0729 18:58:56.851468  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | unable to find current IP address of domain default-k8s-diff-port-612270 in network mk-default-k8s-diff-port-612270
	I0729 18:58:56.851495  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | I0729 18:58:56.851424  152778 retry.go:31] will retry after 1.537187967s: waiting for machine to come up
	I0729 18:58:58.391280  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | domain default-k8s-diff-port-612270 has defined MAC address 52:54:00:e1:29:74 in network mk-default-k8s-diff-port-612270
	I0729 18:58:58.391773  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | unable to find current IP address of domain default-k8s-diff-port-612270 in network mk-default-k8s-diff-port-612270
	I0729 18:58:58.391794  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | I0729 18:58:58.391730  152778 retry.go:31] will retry after 2.054089846s: waiting for machine to come up
	I0729 18:59:00.448904  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | domain default-k8s-diff-port-612270 has defined MAC address 52:54:00:e1:29:74 in network mk-default-k8s-diff-port-612270
	I0729 18:59:00.449506  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | unable to find current IP address of domain default-k8s-diff-port-612270 in network mk-default-k8s-diff-port-612270
	I0729 18:59:00.449537  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | I0729 18:59:00.449415  152778 retry.go:31] will retry after 2.180259499s: waiting for machine to come up
	I0729 18:59:02.631556  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | domain default-k8s-diff-port-612270 has defined MAC address 52:54:00:e1:29:74 in network mk-default-k8s-diff-port-612270
	I0729 18:59:02.631965  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | unable to find current IP address of domain default-k8s-diff-port-612270 in network mk-default-k8s-diff-port-612270
	I0729 18:59:02.632024  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | I0729 18:59:02.631947  152778 retry.go:31] will retry after 2.332063198s: waiting for machine to come up
	I0729 18:59:04.967508  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | domain default-k8s-diff-port-612270 has defined MAC address 52:54:00:e1:29:74 in network mk-default-k8s-diff-port-612270
	I0729 18:59:04.968023  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | unable to find current IP address of domain default-k8s-diff-port-612270 in network mk-default-k8s-diff-port-612270
	I0729 18:59:04.968052  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | I0729 18:59:04.967968  152778 retry.go:31] will retry after 4.487572939s: waiting for machine to come up
	I0729 18:59:10.725424  152077 start.go:364] duration metric: took 3m31.036575503s to acquireMachinesLock for "old-k8s-version-834964"
	I0729 18:59:10.725504  152077 start.go:96] Skipping create...Using existing machine configuration
	I0729 18:59:10.725513  152077 fix.go:54] fixHost starting: 
	I0729 18:59:10.726151  152077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:59:10.726198  152077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:59:10.742782  152077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37043
	I0729 18:59:10.743229  152077 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:59:10.743775  152077 main.go:141] libmachine: Using API Version  1
	I0729 18:59:10.743810  152077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:59:10.744116  152077 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:59:10.744309  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .DriverName
	I0729 18:59:10.744484  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetState
	I0729 18:59:10.745829  152077 fix.go:112] recreateIfNeeded on old-k8s-version-834964: state=Stopped err=<nil>
	I0729 18:59:10.745859  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .DriverName
	W0729 18:59:10.746000  152077 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 18:59:10.748309  152077 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-834964" ...
	I0729 18:59:09.459346  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | domain default-k8s-diff-port-612270 has defined MAC address 52:54:00:e1:29:74 in network mk-default-k8s-diff-port-612270
	I0729 18:59:09.459767  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Found IP for machine: 192.168.39.152
	I0729 18:59:09.459789  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | domain default-k8s-diff-port-612270 has current primary IP address 192.168.39.152 and MAC address 52:54:00:e1:29:74 in network mk-default-k8s-diff-port-612270
	I0729 18:59:09.459802  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Reserving static IP address...
	I0729 18:59:09.460220  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-612270", mac: "52:54:00:e1:29:74", ip: "192.168.39.152"} in network mk-default-k8s-diff-port-612270: {Iface:virbr3 ExpiryTime:2024-07-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:e1:29:74 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:default-k8s-diff-port-612270 Clientid:01:52:54:00:e1:29:74}
	I0729 18:59:09.460239  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Reserved static IP address: 192.168.39.152
	I0729 18:59:09.460256  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | skip adding static IP to network mk-default-k8s-diff-port-612270 - found existing host DHCP lease matching {name: "default-k8s-diff-port-612270", mac: "52:54:00:e1:29:74", ip: "192.168.39.152"}
	I0729 18:59:09.460268  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | Getting to WaitForSSH function...
	I0729 18:59:09.460284  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for SSH to be available...
	I0729 18:59:09.462322  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | domain default-k8s-diff-port-612270 has defined MAC address 52:54:00:e1:29:74 in network mk-default-k8s-diff-port-612270
	I0729 18:59:09.462625  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:29:74", ip: ""} in network mk-default-k8s-diff-port-612270: {Iface:virbr3 ExpiryTime:2024-07-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:e1:29:74 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:default-k8s-diff-port-612270 Clientid:01:52:54:00:e1:29:74}
	I0729 18:59:09.462652  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | domain default-k8s-diff-port-612270 has defined IP address 192.168.39.152 and MAC address 52:54:00:e1:29:74 in network mk-default-k8s-diff-port-612270
	I0729 18:59:09.462713  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | Using SSH client type: external
	I0729 18:59:09.462744  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | Using SSH private key: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/default-k8s-diff-port-612270/id_rsa (-rw-------)
	I0729 18:59:09.462794  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.152 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19339-88081/.minikube/machines/default-k8s-diff-port-612270/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 18:59:09.462813  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | About to run SSH command:
	I0729 18:59:09.462833  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | exit 0
	I0729 18:59:09.584815  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | SSH cmd err, output: <nil>: 
	I0729 18:59:09.585227  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Calling .GetConfigRaw
	I0729 18:59:09.585883  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Calling .GetIP
	I0729 18:59:09.588595  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | domain default-k8s-diff-port-612270 has defined MAC address 52:54:00:e1:29:74 in network mk-default-k8s-diff-port-612270
	I0729 18:59:09.588984  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:29:74", ip: ""} in network mk-default-k8s-diff-port-612270: {Iface:virbr3 ExpiryTime:2024-07-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:e1:29:74 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:default-k8s-diff-port-612270 Clientid:01:52:54:00:e1:29:74}
	I0729 18:59:09.589021  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | domain default-k8s-diff-port-612270 has defined IP address 192.168.39.152 and MAC address 52:54:00:e1:29:74 in network mk-default-k8s-diff-port-612270
	I0729 18:59:09.589297  151772 profile.go:143] Saving config to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/default-k8s-diff-port-612270/config.json ...
	I0729 18:59:09.589473  151772 machine.go:94] provisionDockerMachine start ...
	I0729 18:59:09.589491  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Calling .DriverName
	I0729 18:59:09.589745  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Calling .GetSSHHostname
	I0729 18:59:09.591995  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | domain default-k8s-diff-port-612270 has defined MAC address 52:54:00:e1:29:74 in network mk-default-k8s-diff-port-612270
	I0729 18:59:09.592332  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:29:74", ip: ""} in network mk-default-k8s-diff-port-612270: {Iface:virbr3 ExpiryTime:2024-07-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:e1:29:74 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:default-k8s-diff-port-612270 Clientid:01:52:54:00:e1:29:74}
	I0729 18:59:09.592370  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | domain default-k8s-diff-port-612270 has defined IP address 192.168.39.152 and MAC address 52:54:00:e1:29:74 in network mk-default-k8s-diff-port-612270
	I0729 18:59:09.592517  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Calling .GetSSHPort
	I0729 18:59:09.592673  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Calling .GetSSHKeyPath
	I0729 18:59:09.592807  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Calling .GetSSHKeyPath
	I0729 18:59:09.592934  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Calling .GetSSHUsername
	I0729 18:59:09.593072  151772 main.go:141] libmachine: Using SSH client type: native
	I0729 18:59:09.593387  151772 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I0729 18:59:09.593404  151772 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 18:59:09.692984  151772 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 18:59:09.693014  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Calling .GetMachineName
	I0729 18:59:09.693249  151772 buildroot.go:166] provisioning hostname "default-k8s-diff-port-612270"
	I0729 18:59:09.693281  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Calling .GetMachineName
	I0729 18:59:09.693520  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Calling .GetSSHHostname
	I0729 18:59:09.696083  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | domain default-k8s-diff-port-612270 has defined MAC address 52:54:00:e1:29:74 in network mk-default-k8s-diff-port-612270
	I0729 18:59:09.696461  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:29:74", ip: ""} in network mk-default-k8s-diff-port-612270: {Iface:virbr3 ExpiryTime:2024-07-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:e1:29:74 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:default-k8s-diff-port-612270 Clientid:01:52:54:00:e1:29:74}
	I0729 18:59:09.696495  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | domain default-k8s-diff-port-612270 has defined IP address 192.168.39.152 and MAC address 52:54:00:e1:29:74 in network mk-default-k8s-diff-port-612270
	I0729 18:59:09.696651  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Calling .GetSSHPort
	I0729 18:59:09.696830  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Calling .GetSSHKeyPath
	I0729 18:59:09.697013  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Calling .GetSSHKeyPath
	I0729 18:59:09.697187  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Calling .GetSSHUsername
	I0729 18:59:09.697345  151772 main.go:141] libmachine: Using SSH client type: native
	I0729 18:59:09.697503  151772 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I0729 18:59:09.697518  151772 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-612270 && echo "default-k8s-diff-port-612270" | sudo tee /etc/hostname
	I0729 18:59:09.810842  151772 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-612270
	
	I0729 18:59:09.810868  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Calling .GetSSHHostname
	I0729 18:59:09.813960  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | domain default-k8s-diff-port-612270 has defined MAC address 52:54:00:e1:29:74 in network mk-default-k8s-diff-port-612270
	I0729 18:59:09.814302  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:29:74", ip: ""} in network mk-default-k8s-diff-port-612270: {Iface:virbr3 ExpiryTime:2024-07-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:e1:29:74 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:default-k8s-diff-port-612270 Clientid:01:52:54:00:e1:29:74}
	I0729 18:59:09.814328  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | domain default-k8s-diff-port-612270 has defined IP address 192.168.39.152 and MAC address 52:54:00:e1:29:74 in network mk-default-k8s-diff-port-612270
	I0729 18:59:09.814478  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Calling .GetSSHPort
	I0729 18:59:09.814659  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Calling .GetSSHKeyPath
	I0729 18:59:09.814836  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Calling .GetSSHKeyPath
	I0729 18:59:09.814956  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Calling .GetSSHUsername
	I0729 18:59:09.815138  151772 main.go:141] libmachine: Using SSH client type: native
	I0729 18:59:09.815321  151772 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I0729 18:59:09.815338  151772 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-612270' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-612270/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-612270' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 18:59:09.925373  151772 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 18:59:09.925399  151772 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19339-88081/.minikube CaCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19339-88081/.minikube}
	I0729 18:59:09.925424  151772 buildroot.go:174] setting up certificates
	I0729 18:59:09.925434  151772 provision.go:84] configureAuth start
	I0729 18:59:09.925442  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Calling .GetMachineName
	I0729 18:59:09.925713  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Calling .GetIP
	I0729 18:59:09.928482  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | domain default-k8s-diff-port-612270 has defined MAC address 52:54:00:e1:29:74 in network mk-default-k8s-diff-port-612270
	I0729 18:59:09.928819  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:29:74", ip: ""} in network mk-default-k8s-diff-port-612270: {Iface:virbr3 ExpiryTime:2024-07-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:e1:29:74 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:default-k8s-diff-port-612270 Clientid:01:52:54:00:e1:29:74}
	I0729 18:59:09.928850  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | domain default-k8s-diff-port-612270 has defined IP address 192.168.39.152 and MAC address 52:54:00:e1:29:74 in network mk-default-k8s-diff-port-612270
	I0729 18:59:09.928996  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Calling .GetSSHHostname
	I0729 18:59:09.931176  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | domain default-k8s-diff-port-612270 has defined MAC address 52:54:00:e1:29:74 in network mk-default-k8s-diff-port-612270
	I0729 18:59:09.931504  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:29:74", ip: ""} in network mk-default-k8s-diff-port-612270: {Iface:virbr3 ExpiryTime:2024-07-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:e1:29:74 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:default-k8s-diff-port-612270 Clientid:01:52:54:00:e1:29:74}
	I0729 18:59:09.931535  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | domain default-k8s-diff-port-612270 has defined IP address 192.168.39.152 and MAC address 52:54:00:e1:29:74 in network mk-default-k8s-diff-port-612270
	I0729 18:59:09.931624  151772 provision.go:143] copyHostCerts
	I0729 18:59:09.931680  151772 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem, removing ...
	I0729 18:59:09.931691  151772 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem
	I0729 18:59:09.931753  151772 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem (1078 bytes)
	I0729 18:59:09.931847  151772 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem, removing ...
	I0729 18:59:09.931855  151772 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem
	I0729 18:59:09.931878  151772 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem (1123 bytes)
	I0729 18:59:09.931930  151772 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem, removing ...
	I0729 18:59:09.931936  151772 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem
	I0729 18:59:09.931957  151772 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem (1679 bytes)
	I0729 18:59:09.932007  151772 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-612270 san=[127.0.0.1 192.168.39.152 default-k8s-diff-port-612270 localhost minikube]
	I0729 18:59:10.079590  151772 provision.go:177] copyRemoteCerts
	I0729 18:59:10.079657  151772 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 18:59:10.079683  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Calling .GetSSHHostname
	I0729 18:59:10.082373  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | domain default-k8s-diff-port-612270 has defined MAC address 52:54:00:e1:29:74 in network mk-default-k8s-diff-port-612270
	I0729 18:59:10.082754  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:29:74", ip: ""} in network mk-default-k8s-diff-port-612270: {Iface:virbr3 ExpiryTime:2024-07-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:e1:29:74 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:default-k8s-diff-port-612270 Clientid:01:52:54:00:e1:29:74}
	I0729 18:59:10.082787  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | domain default-k8s-diff-port-612270 has defined IP address 192.168.39.152 and MAC address 52:54:00:e1:29:74 in network mk-default-k8s-diff-port-612270
	I0729 18:59:10.082953  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Calling .GetSSHPort
	I0729 18:59:10.083141  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Calling .GetSSHKeyPath
	I0729 18:59:10.083306  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Calling .GetSSHUsername
	I0729 18:59:10.083465  151772 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/default-k8s-diff-port-612270/id_rsa Username:docker}
	I0729 18:59:10.162700  151772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 18:59:10.186121  151772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0729 18:59:10.209009  151772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 18:59:10.231605  151772 provision.go:87] duration metric: took 306.158069ms to configureAuth
	I0729 18:59:10.231632  151772 buildroot.go:189] setting minikube options for container-runtime
	I0729 18:59:10.231797  151772 config.go:182] Loaded profile config "default-k8s-diff-port-612270": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:59:10.231883  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Calling .GetSSHHostname
	I0729 18:59:10.234529  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | domain default-k8s-diff-port-612270 has defined MAC address 52:54:00:e1:29:74 in network mk-default-k8s-diff-port-612270
	I0729 18:59:10.234902  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:29:74", ip: ""} in network mk-default-k8s-diff-port-612270: {Iface:virbr3 ExpiryTime:2024-07-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:e1:29:74 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:default-k8s-diff-port-612270 Clientid:01:52:54:00:e1:29:74}
	I0729 18:59:10.234921  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | domain default-k8s-diff-port-612270 has defined IP address 192.168.39.152 and MAC address 52:54:00:e1:29:74 in network mk-default-k8s-diff-port-612270
	I0729 18:59:10.235126  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Calling .GetSSHPort
	I0729 18:59:10.235330  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Calling .GetSSHKeyPath
	I0729 18:59:10.235490  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Calling .GetSSHKeyPath
	I0729 18:59:10.235638  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Calling .GetSSHUsername
	I0729 18:59:10.235832  151772 main.go:141] libmachine: Using SSH client type: native
	I0729 18:59:10.236048  151772 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I0729 18:59:10.236069  151772 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 18:59:10.494040  151772 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 18:59:10.494071  151772 machine.go:97] duration metric: took 904.584735ms to provisionDockerMachine
	I0729 18:59:10.494086  151772 start.go:293] postStartSetup for "default-k8s-diff-port-612270" (driver="kvm2")
	I0729 18:59:10.494099  151772 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 18:59:10.494119  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Calling .DriverName
	I0729 18:59:10.494657  151772 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 18:59:10.494694  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Calling .GetSSHHostname
	I0729 18:59:10.497110  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | domain default-k8s-diff-port-612270 has defined MAC address 52:54:00:e1:29:74 in network mk-default-k8s-diff-port-612270
	I0729 18:59:10.497554  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:29:74", ip: ""} in network mk-default-k8s-diff-port-612270: {Iface:virbr3 ExpiryTime:2024-07-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:e1:29:74 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:default-k8s-diff-port-612270 Clientid:01:52:54:00:e1:29:74}
	I0729 18:59:10.497578  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | domain default-k8s-diff-port-612270 has defined IP address 192.168.39.152 and MAC address 52:54:00:e1:29:74 in network mk-default-k8s-diff-port-612270
	I0729 18:59:10.497764  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Calling .GetSSHPort
	I0729 18:59:10.497936  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Calling .GetSSHKeyPath
	I0729 18:59:10.498106  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Calling .GetSSHUsername
	I0729 18:59:10.498237  151772 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/default-k8s-diff-port-612270/id_rsa Username:docker}
	I0729 18:59:10.579646  151772 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 18:59:10.584079  151772 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 18:59:10.584113  151772 filesync.go:126] Scanning /home/jenkins/minikube-integration/19339-88081/.minikube/addons for local assets ...
	I0729 18:59:10.584178  151772 filesync.go:126] Scanning /home/jenkins/minikube-integration/19339-88081/.minikube/files for local assets ...
	I0729 18:59:10.584263  151772 filesync.go:149] local asset: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem -> 952822.pem in /etc/ssl/certs
	I0729 18:59:10.584363  151772 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 18:59:10.593778  151772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem --> /etc/ssl/certs/952822.pem (1708 bytes)
	I0729 18:59:10.620911  151772 start.go:296] duration metric: took 126.807142ms for postStartSetup
	I0729 18:59:10.620963  151772 fix.go:56] duration metric: took 19.351599114s for fixHost
	I0729 18:59:10.620985  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Calling .GetSSHHostname
	I0729 18:59:10.623691  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | domain default-k8s-diff-port-612270 has defined MAC address 52:54:00:e1:29:74 in network mk-default-k8s-diff-port-612270
	I0729 18:59:10.624051  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:29:74", ip: ""} in network mk-default-k8s-diff-port-612270: {Iface:virbr3 ExpiryTime:2024-07-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:e1:29:74 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:default-k8s-diff-port-612270 Clientid:01:52:54:00:e1:29:74}
	I0729 18:59:10.624082  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | domain default-k8s-diff-port-612270 has defined IP address 192.168.39.152 and MAC address 52:54:00:e1:29:74 in network mk-default-k8s-diff-port-612270
	I0729 18:59:10.624227  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Calling .GetSSHPort
	I0729 18:59:10.624445  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Calling .GetSSHKeyPath
	I0729 18:59:10.624648  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Calling .GetSSHKeyPath
	I0729 18:59:10.624835  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Calling .GetSSHUsername
	I0729 18:59:10.624980  151772 main.go:141] libmachine: Using SSH client type: native
	I0729 18:59:10.625141  151772 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I0729 18:59:10.625151  151772 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 18:59:10.725275  151772 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722279550.693124816
	
	I0729 18:59:10.725299  151772 fix.go:216] guest clock: 1722279550.693124816
	I0729 18:59:10.725306  151772 fix.go:229] Guest: 2024-07-29 18:59:10.693124816 +0000 UTC Remote: 2024-07-29 18:59:10.620967015 +0000 UTC m=+249.068601061 (delta=72.157801ms)
	I0729 18:59:10.725332  151772 fix.go:200] guest clock delta is within tolerance: 72.157801ms
	I0729 18:59:10.725337  151772 start.go:83] releasing machines lock for "default-k8s-diff-port-612270", held for 19.456006983s
	I0729 18:59:10.725366  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Calling .DriverName
	I0729 18:59:10.725656  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Calling .GetIP
	I0729 18:59:10.728082  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | domain default-k8s-diff-port-612270 has defined MAC address 52:54:00:e1:29:74 in network mk-default-k8s-diff-port-612270
	I0729 18:59:10.728441  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:29:74", ip: ""} in network mk-default-k8s-diff-port-612270: {Iface:virbr3 ExpiryTime:2024-07-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:e1:29:74 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:default-k8s-diff-port-612270 Clientid:01:52:54:00:e1:29:74}
	I0729 18:59:10.728468  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | domain default-k8s-diff-port-612270 has defined IP address 192.168.39.152 and MAC address 52:54:00:e1:29:74 in network mk-default-k8s-diff-port-612270
	I0729 18:59:10.728651  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Calling .DriverName
	I0729 18:59:10.729222  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Calling .DriverName
	I0729 18:59:10.729427  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Calling .DriverName
	I0729 18:59:10.729493  151772 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 18:59:10.729553  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Calling .GetSSHHostname
	I0729 18:59:10.729704  151772 ssh_runner.go:195] Run: cat /version.json
	I0729 18:59:10.729726  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Calling .GetSSHHostname
	I0729 18:59:10.732429  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | domain default-k8s-diff-port-612270 has defined MAC address 52:54:00:e1:29:74 in network mk-default-k8s-diff-port-612270
	I0729 18:59:10.732626  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | domain default-k8s-diff-port-612270 has defined MAC address 52:54:00:e1:29:74 in network mk-default-k8s-diff-port-612270
	I0729 18:59:10.732783  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:29:74", ip: ""} in network mk-default-k8s-diff-port-612270: {Iface:virbr3 ExpiryTime:2024-07-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:e1:29:74 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:default-k8s-diff-port-612270 Clientid:01:52:54:00:e1:29:74}
	I0729 18:59:10.732807  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | domain default-k8s-diff-port-612270 has defined IP address 192.168.39.152 and MAC address 52:54:00:e1:29:74 in network mk-default-k8s-diff-port-612270
	I0729 18:59:10.732934  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Calling .GetSSHPort
	I0729 18:59:10.733052  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:29:74", ip: ""} in network mk-default-k8s-diff-port-612270: {Iface:virbr3 ExpiryTime:2024-07-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:e1:29:74 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:default-k8s-diff-port-612270 Clientid:01:52:54:00:e1:29:74}
	I0729 18:59:10.733071  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | domain default-k8s-diff-port-612270 has defined IP address 192.168.39.152 and MAC address 52:54:00:e1:29:74 in network mk-default-k8s-diff-port-612270
	I0729 18:59:10.733104  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Calling .GetSSHKeyPath
	I0729 18:59:10.733211  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Calling .GetSSHPort
	I0729 18:59:10.733292  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Calling .GetSSHUsername
	I0729 18:59:10.733355  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Calling .GetSSHKeyPath
	I0729 18:59:10.733413  151772 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/default-k8s-diff-port-612270/id_rsa Username:docker}
	I0729 18:59:10.733474  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Calling .GetSSHUsername
	I0729 18:59:10.733640  151772 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/default-k8s-diff-port-612270/id_rsa Username:docker}
	I0729 18:59:10.810774  151772 ssh_runner.go:195] Run: systemctl --version
	I0729 18:59:10.836710  151772 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 18:59:10.990653  151772 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 18:59:10.996925  151772 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 18:59:10.996997  151772 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 18:59:11.015182  151772 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 18:59:11.015220  151772 start.go:495] detecting cgroup driver to use...
	I0729 18:59:11.015293  151772 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 18:59:11.038835  151772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 18:59:11.054338  151772 docker.go:217] disabling cri-docker service (if available) ...
	I0729 18:59:11.054404  151772 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 18:59:11.068826  151772 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 18:59:11.083295  151772 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 18:59:11.203622  151772 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 18:59:11.364850  151772 docker.go:233] disabling docker service ...
	I0729 18:59:11.364962  151772 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 18:59:11.385039  151772 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 18:59:11.399025  151772 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 18:59:11.524688  151772 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 18:59:11.650039  151772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 18:59:11.664608  151772 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 18:59:11.684098  151772 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 18:59:11.684167  151772 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:59:11.695358  151772 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 18:59:11.695423  151772 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:59:11.706281  151772 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:59:11.717160  151772 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:59:11.727666  151772 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 18:59:11.740052  151772 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:59:11.750998  151772 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:59:11.768427  151772 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:59:11.779955  151772 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 18:59:11.789766  151772 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 18:59:11.789827  151772 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 18:59:11.804800  151772 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 18:59:11.815534  151772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:59:11.936963  151772 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 18:59:12.073196  151772 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 18:59:12.073269  151772 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 18:59:12.078307  151772 start.go:563] Will wait 60s for crictl version
	I0729 18:59:12.078371  151772 ssh_runner.go:195] Run: which crictl
	I0729 18:59:12.082380  151772 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 18:59:12.127818  151772 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 18:59:12.127932  151772 ssh_runner.go:195] Run: crio --version
	I0729 18:59:12.157837  151772 ssh_runner.go:195] Run: crio --version
	I0729 18:59:12.188274  151772 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 18:59:10.749572  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .Start
	I0729 18:59:10.749851  152077 main.go:141] libmachine: (old-k8s-version-834964) Ensuring networks are active...
	I0729 18:59:10.750619  152077 main.go:141] libmachine: (old-k8s-version-834964) Ensuring network default is active
	I0729 18:59:10.750954  152077 main.go:141] libmachine: (old-k8s-version-834964) Ensuring network mk-old-k8s-version-834964 is active
	I0729 18:59:10.751344  152077 main.go:141] libmachine: (old-k8s-version-834964) Getting domain xml...
	I0729 18:59:10.752108  152077 main.go:141] libmachine: (old-k8s-version-834964) Creating domain...
	I0729 18:59:11.103179  152077 main.go:141] libmachine: (old-k8s-version-834964) Waiting to get IP...
	I0729 18:59:11.104133  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:11.104682  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | unable to find current IP address of domain old-k8s-version-834964 in network mk-old-k8s-version-834964
	I0729 18:59:11.104757  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | I0729 18:59:11.104644  152890 retry.go:31] will retry after 259.266842ms: waiting for machine to come up
	I0729 18:59:11.365299  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:11.365916  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | unable to find current IP address of domain old-k8s-version-834964 in network mk-old-k8s-version-834964
	I0729 18:59:11.365943  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | I0729 18:59:11.365862  152890 retry.go:31] will retry after 274.029734ms: waiting for machine to come up
	I0729 18:59:11.641428  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:11.641885  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | unable to find current IP address of domain old-k8s-version-834964 in network mk-old-k8s-version-834964
	I0729 18:59:11.641910  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | I0729 18:59:11.641824  152890 retry.go:31] will retry after 363.716855ms: waiting for machine to come up
	I0729 18:59:12.007550  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:12.008200  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | unable to find current IP address of domain old-k8s-version-834964 in network mk-old-k8s-version-834964
	I0729 18:59:12.008226  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | I0729 18:59:12.008158  152890 retry.go:31] will retry after 537.4279ms: waiting for machine to come up
	I0729 18:59:12.546892  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:12.547573  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | unable to find current IP address of domain old-k8s-version-834964 in network mk-old-k8s-version-834964
	I0729 18:59:12.547605  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | I0729 18:59:12.547529  152890 retry.go:31] will retry after 756.011995ms: waiting for machine to come up
	I0729 18:59:13.305557  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:13.306344  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | unable to find current IP address of domain old-k8s-version-834964 in network mk-old-k8s-version-834964
	I0729 18:59:13.306382  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | I0729 18:59:13.306295  152890 retry.go:31] will retry after 949.340755ms: waiting for machine to come up
	I0729 18:59:14.257589  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:14.258115  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | unable to find current IP address of domain old-k8s-version-834964 in network mk-old-k8s-version-834964
	I0729 18:59:14.258148  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | I0729 18:59:14.258059  152890 retry.go:31] will retry after 1.148418352s: waiting for machine to come up
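The "will retry after …" lines come from a retry helper that backs off between attempts while the old-k8s-version-834964 domain waits for a DHCP lease. The pattern looks roughly like the sketch below; the lookup function is a stand-in, not libmachine's real API, and the delays are illustrative:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff keeps calling fn until it succeeds or maxTime elapses,
    // sleeping a jittered, growing delay between attempts.
    func retryWithBackoff(fn func() error, maxTime time.Duration) error {
        start := time.Now()
        delay := 250 * time.Millisecond
        for {
            err := fn()
            if err == nil {
                return nil
            }
            if time.Since(start) > maxTime {
                return fmt.Errorf("gave up after %s: %w", maxTime, err)
            }
            jittered := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %s: %v\n", jittered, err)
            time.Sleep(jittered)
            delay *= 2
        }
    }

    func main() {
        attempts := 0
        lookupIP := func() error { // stand-in for the libvirt DHCP lease lookup
            attempts++
            if attempts < 4 {
                return errors.New("unable to find current IP address of domain")
            }
            return nil
        }
        if err := retryWithBackoff(lookupIP, 2*time.Minute); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("machine came up after", attempts, "attempts")
    }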
	I0729 18:59:12.189412  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) Calling .GetIP
	I0729 18:59:12.192214  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | domain default-k8s-diff-port-612270 has defined MAC address 52:54:00:e1:29:74 in network mk-default-k8s-diff-port-612270
	I0729 18:59:12.192641  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:29:74", ip: ""} in network mk-default-k8s-diff-port-612270: {Iface:virbr3 ExpiryTime:2024-07-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:e1:29:74 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:default-k8s-diff-port-612270 Clientid:01:52:54:00:e1:29:74}
	I0729 18:59:12.192685  151772 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | domain default-k8s-diff-port-612270 has defined IP address 192.168.39.152 and MAC address 52:54:00:e1:29:74 in network mk-default-k8s-diff-port-612270
	I0729 18:59:12.192845  151772 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 18:59:12.198079  151772 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:59:12.214620  151772 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-612270 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-612270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 18:59:12.214771  151772 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 18:59:12.214832  151772 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:59:12.256007  151772 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 18:59:12.256082  151772 ssh_runner.go:195] Run: which lz4
	I0729 18:59:12.261193  151772 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 18:59:12.266685  151772 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 18:59:12.266708  151772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 18:59:13.712792  151772 crio.go:462] duration metric: took 1.451630388s to copy over tarball
	I0729 18:59:13.712880  151772 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 18:59:15.910592  151772 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.197671844s)
	I0729 18:59:15.910632  151772 crio.go:469] duration metric: took 2.197808025s to extract the tarball
	I0729 18:59:15.910640  151772 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 18:59:15.948220  151772 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:59:15.992795  151772 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 18:59:15.992820  151772 cache_images.go:84] Images are preloaded, skipping loading
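Before and after extracting the preload tarball, minikube asks `sudo crictl images --output json` whether the expected control-plane images are present. A sketch of that check; the JSON field names (`images`, `repoTags`) are an assumption about crictl's output shape, not verified against this crictl build:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
        "strings"
    )

    // imageList mirrors the assumed shape of `crictl images --output json`.
    type imageList struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func hasImage(want string) (bool, error) {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            return false, err
        }
        var list imageList
        if err := json.Unmarshal(out, &list); err != nil {
            return false, err
        }
        for _, img := range list.Images {
            for _, tag := range img.RepoTags {
                if strings.EqualFold(tag, want) {
                    return true, nil
                }
            }
        }
        return false, nil
    }

    func main() {
        ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.30.3")
        if err != nil {
            fmt.Println("crictl failed:", err)
            return
        }
        fmt.Println("preloaded:", ok)
    }

In the log above, the first check finds nothing, the tarball is copied and extracted, and the second check reports all images preloaded.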
	I0729 18:59:15.992828  151772 kubeadm.go:934] updating node { 192.168.39.152 8444 v1.30.3 crio true true} ...
	I0729 18:59:15.992984  151772 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-612270 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.152
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-612270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 18:59:15.993067  151772 ssh_runner.go:195] Run: crio config
	I0729 18:59:16.035680  151772 cni.go:84] Creating CNI manager for ""
	I0729 18:59:16.035706  151772 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:59:16.035734  151772 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 18:59:16.035756  151772 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.152 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-612270 NodeName:default-k8s-diff-port-612270 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.152"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.152 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 18:59:16.035893  151772 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.152
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-612270"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.152
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.152"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 18:59:16.035960  151772 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 18:59:16.046337  151772 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 18:59:16.046416  151772 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 18:59:16.056614  151772 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0729 18:59:16.073363  151772 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 18:59:16.089436  151772 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
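The kubeadm.yaml and kubelet drop-in copied above are rendered from templates with the node's address, port, name, and Kubernetes version substituted in. A stripped-down illustration using Go's text/template; the template fragment below is illustrative only, not minikube's actual template:

    package main

    import (
        "os"
        "text/template"
    )

    // Only the handful of fields the fragment below needs.
    type kubeadmParams struct {
        AdvertiseAddress  string
        BindPort          int
        NodeName          string
        KubernetesVersion string
    }

    const initConfigTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      criSocket: unix:///var/run/crio/crio.sock
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.AdvertiseAddress}}
    ---
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    kubernetesVersion: {{.KubernetesVersion}}
    `

    func main() {
        t := template.Must(template.New("kubeadm").Parse(initConfigTmpl))
        params := kubeadmParams{
            AdvertiseAddress:  "192.168.39.152",
            BindPort:          8444,
            NodeName:          "default-k8s-diff-port-612270",
            KubernetesVersion: "v1.30.3",
        }
        if err := t.Execute(os.Stdout, params); err != nil {
            panic(err)
        }
    }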
	I0729 18:59:16.105903  151772 ssh_runner.go:195] Run: grep 192.168.39.152	control-plane.minikube.internal$ /etc/hosts
	I0729 18:59:16.109655  151772 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.152	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:59:16.121604  151772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:59:16.228351  151772 ssh_runner.go:195] Run: sudo systemctl start kubelet
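The /etc/hosts updates above use a grep-and-append one-liner over SSH. The equivalent logic, dropping any stale line for the name and re-adding the mapping, looks roughly like the sketch below (point it at a copy of the hosts file rather than the real one; the IP and hostname are the ones from the log):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // pinHost removes existing entries ending in "\tname" and appends "ip\tname".
    func pinHost(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil && !os.IsNotExist(err) {
            return err
        }
        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            if strings.HasSuffix(strings.TrimRight(line, " \t"), "\t"+name) {
                continue // drop stale mapping for this name
            }
            if line != "" {
                kept = append(kept, line)
            }
        }
        kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        if len(os.Args) != 2 {
            fmt.Fprintln(os.Stderr, "usage: pinhost <hosts-file>")
            os.Exit(1)
        }
        if err := pinHost(os.Args[1], "192.168.39.152", "control-plane.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }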
	I0729 18:59:16.244496  151772 certs.go:68] Setting up /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/default-k8s-diff-port-612270 for IP: 192.168.39.152
	I0729 18:59:16.244541  151772 certs.go:194] generating shared ca certs ...
	I0729 18:59:16.244564  151772 certs.go:226] acquiring lock for ca certs: {Name:mk20106d58b3f22ea0fc0f4b499fa3fa572a2690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:59:16.244758  151772 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key
	I0729 18:59:16.244829  151772 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key
	I0729 18:59:16.244842  151772 certs.go:256] generating profile certs ...
	I0729 18:59:16.245015  151772 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/default-k8s-diff-port-612270/client.key
	I0729 18:59:16.245104  151772 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/default-k8s-diff-port-612270/apiserver.key.931ba7c9
	I0729 18:59:16.245158  151772 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/default-k8s-diff-port-612270/proxy-client.key
	I0729 18:59:16.245318  151772 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282.pem (1338 bytes)
	W0729 18:59:16.245357  151772 certs.go:480] ignoring /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282_empty.pem, impossibly tiny 0 bytes
	I0729 18:59:16.245369  151772 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 18:59:16.245404  151772 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem (1078 bytes)
	I0729 18:59:16.245453  151772 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem (1123 bytes)
	I0729 18:59:16.245482  151772 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem (1679 bytes)
	I0729 18:59:16.245531  151772 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem (1708 bytes)
	I0729 18:59:16.246367  151772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 18:59:16.283496  151772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 18:59:16.320533  151772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 18:59:16.347767  151772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 18:59:16.374102  151772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/default-k8s-diff-port-612270/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0729 18:59:16.397810  151772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/default-k8s-diff-port-612270/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 18:59:16.424836  151772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/default-k8s-diff-port-612270/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 18:59:16.451561  151772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/default-k8s-diff-port-612270/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 18:59:16.474127  151772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem --> /usr/share/ca-certificates/952822.pem (1708 bytes)
	I0729 18:59:16.496556  151772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 18:59:16.518917  151772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282.pem --> /usr/share/ca-certificates/95282.pem (1338 bytes)
	I0729 18:59:16.541296  151772 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 18:59:16.558135  151772 ssh_runner.go:195] Run: openssl version
	I0729 18:59:16.563981  151772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/95282.pem && ln -fs /usr/share/ca-certificates/95282.pem /etc/ssl/certs/95282.pem"
	I0729 18:59:16.574881  151772 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/95282.pem
	I0729 18:59:16.579148  151772 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:45 /usr/share/ca-certificates/95282.pem
	I0729 18:59:16.579195  151772 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/95282.pem
	I0729 18:59:16.584872  151772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/95282.pem /etc/ssl/certs/51391683.0"
	I0729 18:59:15.408710  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:15.409421  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | unable to find current IP address of domain old-k8s-version-834964 in network mk-old-k8s-version-834964
	I0729 18:59:15.409444  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | I0729 18:59:15.409376  152890 retry.go:31] will retry after 1.205038454s: waiting for machine to come up
	I0729 18:59:16.615884  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:16.616362  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | unable to find current IP address of domain old-k8s-version-834964 in network mk-old-k8s-version-834964
	I0729 18:59:16.616388  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | I0729 18:59:16.616324  152890 retry.go:31] will retry after 1.590208101s: waiting for machine to come up
	I0729 18:59:18.209022  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:18.209539  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | unable to find current IP address of domain old-k8s-version-834964 in network mk-old-k8s-version-834964
	I0729 18:59:18.209566  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | I0729 18:59:18.209487  152890 retry.go:31] will retry after 2.104289607s: waiting for machine to come up
	I0729 18:59:16.595959  151772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/952822.pem && ln -fs /usr/share/ca-certificates/952822.pem /etc/ssl/certs/952822.pem"
	I0729 18:59:16.606799  151772 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/952822.pem
	I0729 18:59:16.611158  151772 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:45 /usr/share/ca-certificates/952822.pem
	I0729 18:59:16.611209  151772 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/952822.pem
	I0729 18:59:16.616916  151772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/952822.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 18:59:16.629198  151772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 18:59:16.639613  151772 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:59:16.644044  151772 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:59:16.644091  151772 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:59:16.649300  151772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 18:59:16.659966  151772 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 18:59:16.664218  151772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 18:59:16.670136  151772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 18:59:16.676098  151772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 18:59:16.682170  151772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 18:59:16.687984  151772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 18:59:16.693895  151772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
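The `openssl x509 -checkend 86400` runs above ask whether each certificate expires within the next 24 hours. The same check can be done directly with crypto/x509, as in this sketch (the certificate path is just one example taken from the log):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires before now+window.
    func expiresWithin(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block found in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return cert.NotAfter.Before(time.Now().Add(window)), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        if soon {
            fmt.Println("certificate expires within 24h; regeneration needed")
        } else {
            fmt.Println("certificate is still valid beyond 24h")
        }
    }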
	I0729 18:59:16.699538  151772 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-612270 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-612270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:59:16.699645  151772 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 18:59:16.699695  151772 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:59:16.738284  151772 cri.go:89] found id: ""
	I0729 18:59:16.738362  151772 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 18:59:16.751477  151772 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 18:59:16.751506  151772 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 18:59:16.751563  151772 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 18:59:16.761490  151772 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 18:59:16.762267  151772 kubeconfig.go:125] found "default-k8s-diff-port-612270" server: "https://192.168.39.152:8444"
	I0729 18:59:16.763971  151772 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 18:59:16.773439  151772 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.152
	I0729 18:59:16.773482  151772 kubeadm.go:1160] stopping kube-system containers ...
	I0729 18:59:16.773496  151772 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 18:59:16.773557  151772 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:59:16.811469  151772 cri.go:89] found id: ""
	I0729 18:59:16.811539  151772 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 18:59:16.827893  151772 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:59:16.837393  151772 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:59:16.837414  151772 kubeadm.go:157] found existing configuration files:
	
	I0729 18:59:16.837469  151772 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0729 18:59:16.846836  151772 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:59:16.846899  151772 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:59:16.856409  151772 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0729 18:59:16.865317  151772 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:59:16.865367  151772 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:59:16.874658  151772 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0729 18:59:16.883464  151772 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:59:16.883524  151772 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:59:16.892607  151772 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0729 18:59:16.901599  151772 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:59:16.901671  151772 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
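The config check above greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes files that are missing it, so the subsequent kubeadm phases can regenerate them. Approximately the same decision in Go (illustrative only; the paths and endpoint are copied from the log):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // removeIfStale deletes path unless it already references endpoint.
    // A missing file counts as stale but needs no removal.
    func removeIfStale(path, endpoint string) error {
        data, err := os.ReadFile(path)
        if os.IsNotExist(err) {
            return nil // nothing to clean up
        }
        if err != nil {
            return err
        }
        if strings.Contains(string(data), endpoint) {
            return nil // config already points at the right endpoint; keep it
        }
        return os.Remove(path)
    }

    func main() {
        endpoint := "https://control-plane.minikube.internal:8444"
        for _, f := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            if err := removeIfStale(f, endpoint); err != nil {
                fmt.Fprintln(os.Stderr, f, err)
            }
        }
    }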
	I0729 18:59:16.911118  151772 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 18:59:16.920809  151772 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:59:17.042414  151772 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:59:18.174547  151772 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.132095415s)
	I0729 18:59:18.174573  151772 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:59:18.402932  151772 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:59:18.468966  151772 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:59:18.565046  151772 api_server.go:52] waiting for apiserver process to appear ...
	I0729 18:59:18.565140  151772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:19.065324  151772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:19.566014  151772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:19.585159  151772 api_server.go:72] duration metric: took 1.020112402s to wait for apiserver process to appear ...
	I0729 18:59:19.585245  151772 api_server.go:88] waiting for apiserver healthz status ...
	I0729 18:59:19.585285  151772 api_server.go:253] Checking apiserver healthz at https://192.168.39.152:8444/healthz ...
	I0729 18:59:19.585862  151772 api_server.go:269] stopped: https://192.168.39.152:8444/healthz: Get "https://192.168.39.152:8444/healthz": dial tcp 192.168.39.152:8444: connect: connection refused
	I0729 18:59:20.086061  151772 api_server.go:253] Checking apiserver healthz at https://192.168.39.152:8444/healthz ...
	I0729 18:59:20.315121  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:20.315731  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | unable to find current IP address of domain old-k8s-version-834964 in network mk-old-k8s-version-834964
	I0729 18:59:20.315801  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | I0729 18:59:20.315678  152890 retry.go:31] will retry after 1.989233363s: waiting for machine to come up
	I0729 18:59:22.307337  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:22.307892  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | unable to find current IP address of domain old-k8s-version-834964 in network mk-old-k8s-version-834964
	I0729 18:59:22.307923  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | I0729 18:59:22.307834  152890 retry.go:31] will retry after 3.487502857s: waiting for machine to come up
	I0729 18:59:21.983256  151772 api_server.go:279] https://192.168.39.152:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 18:59:21.983295  151772 api_server.go:103] status: https://192.168.39.152:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 18:59:21.983336  151772 api_server.go:253] Checking apiserver healthz at https://192.168.39.152:8444/healthz ...
	I0729 18:59:22.035940  151772 api_server.go:279] https://192.168.39.152:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 18:59:22.035984  151772 api_server.go:103] status: https://192.168.39.152:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
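The healthz loop above GETs https://192.168.39.152:8444/healthz and keeps retrying while the apiserver answers 403 or 500 during startup. A bare-bones poller along those lines; it skips TLS verification only because this sketch has no access to the cluster CA, whereas minikube's client is configured with one:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func pollHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Sketch only: a real client should trust the cluster CA instead.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
            } else {
                fmt.Println("healthz not reachable yet:", err)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }

    func main() {
        if err := pollHealthz("https://192.168.39.152:8444/healthz", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }

The 500 bodies in the log list each post-start hook; the loop simply keeps polling until every hook reports ok and the endpoint returns 200.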
	I0729 18:59:22.086248  151772 api_server.go:253] Checking apiserver healthz at https://192.168.39.152:8444/healthz ...
	I0729 18:59:22.096653  151772 api_server.go:279] https://192.168.39.152:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 18:59:22.096705  151772 api_server.go:103] status: https://192.168.39.152:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 18:59:22.586028  151772 api_server.go:253] Checking apiserver healthz at https://192.168.39.152:8444/healthz ...
	I0729 18:59:22.591482  151772 api_server.go:279] https://192.168.39.152:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 18:59:22.591514  151772 api_server.go:103] status: https://192.168.39.152:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 18:59:23.086285  151772 api_server.go:253] Checking apiserver healthz at https://192.168.39.152:8444/healthz ...
	I0729 18:59:23.098445  151772 api_server.go:279] https://192.168.39.152:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 18:59:23.098474  151772 api_server.go:103] status: https://192.168.39.152:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 18:59:23.586106  151772 api_server.go:253] Checking apiserver healthz at https://192.168.39.152:8444/healthz ...
	I0729 18:59:23.590402  151772 api_server.go:279] https://192.168.39.152:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 18:59:23.590432  151772 api_server.go:103] status: https://192.168.39.152:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 18:59:24.086028  151772 api_server.go:253] Checking apiserver healthz at https://192.168.39.152:8444/healthz ...
	I0729 18:59:24.090320  151772 api_server.go:279] https://192.168.39.152:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 18:59:24.090343  151772 api_server.go:103] status: https://192.168.39.152:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 18:59:24.585980  151772 api_server.go:253] Checking apiserver healthz at https://192.168.39.152:8444/healthz ...
	I0729 18:59:24.590308  151772 api_server.go:279] https://192.168.39.152:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 18:59:24.590334  151772 api_server.go:103] status: https://192.168.39.152:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 18:59:25.085673  151772 api_server.go:253] Checking apiserver healthz at https://192.168.39.152:8444/healthz ...
	I0729 18:59:25.090033  151772 api_server.go:279] https://192.168.39.152:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 18:59:25.090063  151772 api_server.go:103] status: https://192.168.39.152:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 18:59:25.585641  151772 api_server.go:253] Checking apiserver healthz at https://192.168.39.152:8444/healthz ...
	I0729 18:59:25.589643  151772 api_server.go:279] https://192.168.39.152:8444/healthz returned 200:
	ok
	I0729 18:59:25.595833  151772 api_server.go:141] control plane version: v1.30.3
	I0729 18:59:25.595863  151772 api_server.go:131] duration metric: took 6.010592117s to wait for apiserver health ...
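The poll above hits /healthz roughly every 500ms and treats anything other than 200 as "not yet"; the 500 bodies list which post-start hooks are still failing. A minimal Go sketch of that style of health wait, under stated assumptions: waitForHealthz, the two-minute budget, and the InsecureSkipVerify transport are illustrative, not minikube's actual api_server.go code.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the budget runs out.
// A 500 response body lists the post-start hooks that are still failing.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skipping verification stands in for trusting the cluster CA; a real
		// client would pin the CA certificate instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.152:8444/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}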
	I0729 18:59:25.595874  151772 cni.go:84] Creating CNI manager for ""
	I0729 18:59:25.595880  151772 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:59:25.597632  151772 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 18:59:25.598851  151772 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 18:59:25.609559  151772 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
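The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration the "Configuring bridge CNI" line refers to. Its exact contents are not shown in the log, so the conflist below is only an illustrative bridge + portmap chain; the pod subnet and plugin options are assumptions.

package main

import "os"

// bridgeConflist is an example bridge CNI chain; minikube's real
// 1-k8s.conflist may differ in fields and values.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}`

// writeBridgeConflist writes the example config where the log copies its own.
func writeBridgeConflist() error {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		return err
	}
	return os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644)
}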
	I0729 18:59:25.627429  151772 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 18:59:25.636309  151772 system_pods.go:59] 8 kube-system pods found
	I0729 18:59:25.636338  151772 system_pods.go:61] "coredns-7db6d8ff4d-92bn4" [4b96665d-0997-47f3-acf4-be587f87b3f2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 18:59:25.636345  151772 system_pods.go:61] "etcd-default-k8s-diff-port-612270" [649de2bd-87af-4f18-a926-c9b3805e5486] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 18:59:25.636352  151772 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-612270" [7fdf8fe1-7c03-403f-8177-e5f341dac50c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 18:59:25.636358  151772 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-612270" [f9216ab3-a55e-43af-8ffe-5a4f8abbfdf2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 18:59:25.636366  151772 system_pods.go:61] "kube-proxy-fqfvp" [d97fa5e7-2f16-4baf-8ca5-785ef800c05c] Running
	I0729 18:59:25.636370  151772 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-612270" [4b5a1cfd-7f43-485a-997a-e2b56cd781fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 18:59:25.636375  151772 system_pods.go:61] "metrics-server-569cc877fc-shvq4" [e057a5f7-b590-4b55-bd51-c05971adb33e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 18:59:25.636381  151772 system_pods.go:61] "storage-provisioner" [105e1fa1-2e58-4f51-b999-4e93a838bab0] Running
	I0729 18:59:25.636387  151772 system_pods.go:74] duration metric: took 8.940432ms to wait for pod list to return data ...
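The pod list above is just a query of the kube-system namespace through the apiserver. A rough client-go equivalent of that query is sketched below; kubeconfig handling and output are simplified and this is not minikube's system_pods.go.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// listSystemPods prints each kube-system pod with its phase, similar to the
// "waiting for kube-system pods to appear" step in the log.
func listSystemPods(kubeconfig string) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, p := range pods.Items {
		fmt.Printf("%s\t%s\n", p.Name, p.Status.Phase)
	}
	return nil
}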
	I0729 18:59:25.636394  151772 node_conditions.go:102] verifying NodePressure condition ...
	I0729 18:59:25.640314  151772 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 18:59:25.640337  151772 node_conditions.go:123] node cpu capacity is 2
	I0729 18:59:25.640356  151772 node_conditions.go:105] duration metric: took 3.958333ms to run NodePressure ...
	I0729 18:59:25.640372  151772 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:59:25.910565  151772 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 18:59:25.914854  151772 kubeadm.go:739] kubelet initialised
	I0729 18:59:25.914875  151772 kubeadm.go:740] duration metric: took 4.282318ms waiting for restarted kubelet to initialise ...
	I0729 18:59:25.914884  151772 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:59:25.919489  151772 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-92bn4" in "kube-system" namespace to be "Ready" ...
	I0729 18:59:25.923855  151772 pod_ready.go:97] node "default-k8s-diff-port-612270" hosting pod "coredns-7db6d8ff4d-92bn4" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612270" has status "Ready":"False"
	I0729 18:59:25.923875  151772 pod_ready.go:81] duration metric: took 4.363089ms for pod "coredns-7db6d8ff4d-92bn4" in "kube-system" namespace to be "Ready" ...
	E0729 18:59:25.923883  151772 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-612270" hosting pod "coredns-7db6d8ff4d-92bn4" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612270" has status "Ready":"False"
	I0729 18:59:25.923888  151772 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-612270" in "kube-system" namespace to be "Ready" ...
	I0729 18:59:25.928068  151772 pod_ready.go:97] node "default-k8s-diff-port-612270" hosting pod "etcd-default-k8s-diff-port-612270" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612270" has status "Ready":"False"
	I0729 18:59:25.928092  151772 pod_ready.go:81] duration metric: took 4.196533ms for pod "etcd-default-k8s-diff-port-612270" in "kube-system" namespace to be "Ready" ...
	E0729 18:59:25.928103  151772 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-612270" hosting pod "etcd-default-k8s-diff-port-612270" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612270" has status "Ready":"False"
	I0729 18:59:25.928109  151772 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-612270" in "kube-system" namespace to be "Ready" ...
	I0729 18:59:25.931965  151772 pod_ready.go:97] node "default-k8s-diff-port-612270" hosting pod "kube-apiserver-default-k8s-diff-port-612270" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612270" has status "Ready":"False"
	I0729 18:59:25.931991  151772 pod_ready.go:81] duration metric: took 3.875767ms for pod "kube-apiserver-default-k8s-diff-port-612270" in "kube-system" namespace to be "Ready" ...
	E0729 18:59:25.932000  151772 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-612270" hosting pod "kube-apiserver-default-k8s-diff-port-612270" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612270" has status "Ready":"False"
	I0729 18:59:25.932009  151772 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-612270" in "kube-system" namespace to be "Ready" ...
	I0729 18:59:26.031090  151772 pod_ready.go:97] node "default-k8s-diff-port-612270" hosting pod "kube-controller-manager-default-k8s-diff-port-612270" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612270" has status "Ready":"False"
	I0729 18:59:26.031126  151772 pod_ready.go:81] duration metric: took 99.110264ms for pod "kube-controller-manager-default-k8s-diff-port-612270" in "kube-system" namespace to be "Ready" ...
	E0729 18:59:26.031138  151772 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-612270" hosting pod "kube-controller-manager-default-k8s-diff-port-612270" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-612270" has status "Ready":"False"
	I0729 18:59:26.031145  151772 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-fqfvp" in "kube-system" namespace to be "Ready" ...
	I0729 18:59:26.431632  151772 pod_ready.go:92] pod "kube-proxy-fqfvp" in "kube-system" namespace has status "Ready":"True"
	I0729 18:59:26.431661  151772 pod_ready.go:81] duration metric: took 400.508827ms for pod "kube-proxy-fqfvp" in "kube-system" namespace to be "Ready" ...
	I0729 18:59:26.431682  151772 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-612270" in "kube-system" namespace to be "Ready" ...
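Each pod_ready.go wait above reduces to two checks against the apiserver: is the pod's Ready condition True, and is the hosting node itself Ready (when it is not, the wait is recorded as "skipping!" rather than blocking). A small sketch of those checks; corev1 is k8s.io/api/core/v1 and the helper names are illustrative.

package main

import corev1 "k8s.io/api/core/v1"

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// isNodeReady reports whether the node's Ready condition is True; a pod on a
// node that is not Ready gets skipped instead of waited on, as in the log.
func isNodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}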
	I0729 18:59:25.797201  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:25.797736  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | unable to find current IP address of domain old-k8s-version-834964 in network mk-old-k8s-version-834964
	I0729 18:59:25.797780  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | I0729 18:59:25.797650  152890 retry.go:31] will retry after 3.345863727s: waiting for machine to come up
	I0729 18:59:29.147040  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:29.147581  152077 main.go:141] libmachine: (old-k8s-version-834964) Found IP for machine: 192.168.61.89
	I0729 18:59:29.147605  152077 main.go:141] libmachine: (old-k8s-version-834964) Reserving static IP address...
	I0729 18:59:29.147620  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has current primary IP address 192.168.61.89 and MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:29.147994  152077 main.go:141] libmachine: (old-k8s-version-834964) Reserved static IP address: 192.168.61.89
	I0729 18:59:29.148031  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | found host DHCP lease matching {name: "old-k8s-version-834964", mac: "52:54:00:60:d4:59", ip: "192.168.61.89"} in network mk-old-k8s-version-834964: {Iface:virbr1 ExpiryTime:2024-07-29 19:59:21 +0000 UTC Type:0 Mac:52:54:00:60:d4:59 Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:old-k8s-version-834964 Clientid:01:52:54:00:60:d4:59}
	I0729 18:59:29.148049  152077 main.go:141] libmachine: (old-k8s-version-834964) Waiting for SSH to be available...
	I0729 18:59:29.148090  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | skip adding static IP to network mk-old-k8s-version-834964 - found existing host DHCP lease matching {name: "old-k8s-version-834964", mac: "52:54:00:60:d4:59", ip: "192.168.61.89"}
	I0729 18:59:29.148105  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | Getting to WaitForSSH function...
	I0729 18:59:29.150384  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:29.150778  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:59", ip: ""} in network mk-old-k8s-version-834964: {Iface:virbr1 ExpiryTime:2024-07-29 19:59:21 +0000 UTC Type:0 Mac:52:54:00:60:d4:59 Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:old-k8s-version-834964 Clientid:01:52:54:00:60:d4:59}
	I0729 18:59:29.150806  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined IP address 192.168.61.89 and MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:29.150940  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | Using SSH client type: external
	I0729 18:59:29.150987  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | Using SSH private key: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/old-k8s-version-834964/id_rsa (-rw-------)
	I0729 18:59:29.151026  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.89 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19339-88081/.minikube/machines/old-k8s-version-834964/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 18:59:29.151043  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | About to run SSH command:
	I0729 18:59:29.151056  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | exit 0
	I0729 18:59:29.272649  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | SSH cmd err, output: <nil>: 
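The WaitForSSH step shells out to the system ssh binary with the options shown and simply runs `exit 0` until the guest accepts the connection. A sketch of that probe with os/exec; the retry interval and overall timeout are assumptions, while the docker user and key-based options come from the log.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH retries `ssh ... exit 0` until the guest answers or time runs out.
func waitForSSH(ip, keyPath string, timeout time.Duration) error {
	args := []string{
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"docker@" + ip,
		"exit 0",
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("ssh", args...).Run(); err == nil {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("ssh to %s not available within %s", ip, timeout)
}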
	I0729 18:59:29.273065  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetConfigRaw
	I0729 18:59:29.273787  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetIP
	I0729 18:59:29.276070  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:29.276427  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:59", ip: ""} in network mk-old-k8s-version-834964: {Iface:virbr1 ExpiryTime:2024-07-29 19:59:21 +0000 UTC Type:0 Mac:52:54:00:60:d4:59 Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:old-k8s-version-834964 Clientid:01:52:54:00:60:d4:59}
	I0729 18:59:29.276450  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined IP address 192.168.61.89 and MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:29.276734  152077 profile.go:143] Saving config to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/old-k8s-version-834964/config.json ...
	I0729 18:59:29.276954  152077 machine.go:94] provisionDockerMachine start ...
	I0729 18:59:29.276973  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .DriverName
	I0729 18:59:29.277164  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHHostname
	I0729 18:59:29.279157  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:29.279493  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:59", ip: ""} in network mk-old-k8s-version-834964: {Iface:virbr1 ExpiryTime:2024-07-29 19:59:21 +0000 UTC Type:0 Mac:52:54:00:60:d4:59 Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:old-k8s-version-834964 Clientid:01:52:54:00:60:d4:59}
	I0729 18:59:29.279518  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined IP address 192.168.61.89 and MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:29.279679  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHPort
	I0729 18:59:29.279845  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHKeyPath
	I0729 18:59:29.279977  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHKeyPath
	I0729 18:59:29.280130  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHUsername
	I0729 18:59:29.280282  152077 main.go:141] libmachine: Using SSH client type: native
	I0729 18:59:29.280469  152077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.89 22 <nil> <nil>}
	I0729 18:59:29.280481  152077 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 18:59:29.376976  152077 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 18:59:29.377010  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetMachineName
	I0729 18:59:29.377308  152077 buildroot.go:166] provisioning hostname "old-k8s-version-834964"
	I0729 18:59:29.377334  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetMachineName
	I0729 18:59:29.377543  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHHostname
	I0729 18:59:29.380045  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:29.380366  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:59", ip: ""} in network mk-old-k8s-version-834964: {Iface:virbr1 ExpiryTime:2024-07-29 19:59:21 +0000 UTC Type:0 Mac:52:54:00:60:d4:59 Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:old-k8s-version-834964 Clientid:01:52:54:00:60:d4:59}
	I0729 18:59:29.380395  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined IP address 192.168.61.89 and MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:29.380510  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHPort
	I0729 18:59:29.380668  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHKeyPath
	I0729 18:59:29.380782  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHKeyPath
	I0729 18:59:29.380919  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHUsername
	I0729 18:59:29.381098  152077 main.go:141] libmachine: Using SSH client type: native
	I0729 18:59:29.381267  152077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.89 22 <nil> <nil>}
	I0729 18:59:29.381283  152077 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-834964 && echo "old-k8s-version-834964" | sudo tee /etc/hostname
	I0729 18:59:29.495056  152077 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-834964
	
	I0729 18:59:29.495080  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHHostname
	I0729 18:59:29.497946  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:29.498325  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:59", ip: ""} in network mk-old-k8s-version-834964: {Iface:virbr1 ExpiryTime:2024-07-29 19:59:21 +0000 UTC Type:0 Mac:52:54:00:60:d4:59 Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:old-k8s-version-834964 Clientid:01:52:54:00:60:d4:59}
	I0729 18:59:29.498357  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined IP address 192.168.61.89 and MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:29.498560  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHPort
	I0729 18:59:29.498766  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHKeyPath
	I0729 18:59:29.498930  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHKeyPath
	I0729 18:59:29.499047  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHUsername
	I0729 18:59:29.499173  152077 main.go:141] libmachine: Using SSH client type: native
	I0729 18:59:29.499353  152077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.89 22 <nil> <nil>}
	I0729 18:59:29.499371  152077 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-834964' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-834964/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-834964' | sudo tee -a /etc/hosts; 
				fi
			fi
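Provisioning the hostname is two remote commands: set the hostname and rewrite /etc/hostname, then point 127.0.1.1 at the new name in /etc/hosts (the shell fragment just above). A sketch of driving the same commands from Go, where run is a stand-in for the ssh_runner seen in the log and not a real minikube API.

package main

import "fmt"

// setGuestHostname mirrors the two provisioning commands shown in the log.
func setGuestHostname(run func(cmd string) error, name string) error {
	if err := run(fmt.Sprintf("sudo hostname %s && echo %q | sudo tee /etc/hostname", name, name)); err != nil {
		return err
	}
	hostsFix := fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, name)
	return run(hostsFix)
}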
	I0729 18:59:30.342254  151436 start.go:364] duration metric: took 34.071112662s to acquireMachinesLock for "no-preload-524369"
	I0729 18:59:30.342336  151436 start.go:96] Skipping create...Using existing machine configuration
	I0729 18:59:30.342379  151436 fix.go:54] fixHost starting: 
	I0729 18:59:30.342831  151436 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:59:30.342884  151436 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:59:30.363393  151436 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34065
	I0729 18:59:30.363935  151436 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:59:30.364501  151436 main.go:141] libmachine: Using API Version  1
	I0729 18:59:30.364532  151436 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:59:30.364929  151436 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:59:30.365140  151436 main.go:141] libmachine: (no-preload-524369) Calling .DriverName
	I0729 18:59:30.365406  151436 main.go:141] libmachine: (no-preload-524369) Calling .GetState
	I0729 18:59:30.367095  151436 fix.go:112] recreateIfNeeded on no-preload-524369: state=Stopped err=<nil>
	I0729 18:59:30.367121  151436 main.go:141] libmachine: (no-preload-524369) Calling .DriverName
	W0729 18:59:30.367277  151436 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 18:59:30.368999  151436 out.go:177] * Restarting existing kvm2 VM for "no-preload-524369" ...
	I0729 18:59:28.438343  151772 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-612270" in "kube-system" namespace has status "Ready":"False"
	I0729 18:59:30.439879  151772 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-612270" in "kube-system" namespace has status "Ready":"False"
	I0729 18:59:29.606227  152077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 18:59:29.606269  152077 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19339-88081/.minikube CaCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19339-88081/.minikube}
	I0729 18:59:29.606313  152077 buildroot.go:174] setting up certificates
	I0729 18:59:29.606326  152077 provision.go:84] configureAuth start
	I0729 18:59:29.606341  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetMachineName
	I0729 18:59:29.606655  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetIP
	I0729 18:59:29.609303  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:29.609706  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:59", ip: ""} in network mk-old-k8s-version-834964: {Iface:virbr1 ExpiryTime:2024-07-29 19:59:21 +0000 UTC Type:0 Mac:52:54:00:60:d4:59 Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:old-k8s-version-834964 Clientid:01:52:54:00:60:d4:59}
	I0729 18:59:29.609730  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined IP address 192.168.61.89 and MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:29.609861  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHHostname
	I0729 18:59:29.612198  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:29.612587  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:59", ip: ""} in network mk-old-k8s-version-834964: {Iface:virbr1 ExpiryTime:2024-07-29 19:59:21 +0000 UTC Type:0 Mac:52:54:00:60:d4:59 Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:old-k8s-version-834964 Clientid:01:52:54:00:60:d4:59}
	I0729 18:59:29.612610  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined IP address 192.168.61.89 and MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:29.612731  152077 provision.go:143] copyHostCerts
	I0729 18:59:29.612780  152077 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem, removing ...
	I0729 18:59:29.612789  152077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem
	I0729 18:59:29.612846  152077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem (1679 bytes)
	I0729 18:59:29.612964  152077 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem, removing ...
	I0729 18:59:29.612976  152077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem
	I0729 18:59:29.612999  152077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem (1078 bytes)
	I0729 18:59:29.613054  152077 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem, removing ...
	I0729 18:59:29.613061  152077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem
	I0729 18:59:29.613077  152077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem (1123 bytes)
	I0729 18:59:29.613123  152077 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-834964 san=[127.0.0.1 192.168.61.89 localhost minikube old-k8s-version-834964]
	I0729 18:59:29.705910  152077 provision.go:177] copyRemoteCerts
	I0729 18:59:29.705976  152077 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 18:59:29.706002  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHHostname
	I0729 18:59:29.708478  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:29.708809  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:59", ip: ""} in network mk-old-k8s-version-834964: {Iface:virbr1 ExpiryTime:2024-07-29 19:59:21 +0000 UTC Type:0 Mac:52:54:00:60:d4:59 Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:old-k8s-version-834964 Clientid:01:52:54:00:60:d4:59}
	I0729 18:59:29.708845  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined IP address 192.168.61.89 and MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:29.709012  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHPort
	I0729 18:59:29.709191  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHKeyPath
	I0729 18:59:29.709356  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHUsername
	I0729 18:59:29.709462  152077 sshutil.go:53] new ssh client: &{IP:192.168.61.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/old-k8s-version-834964/id_rsa Username:docker}
	I0729 18:59:29.786569  152077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 18:59:29.810631  152077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0729 18:59:29.833915  152077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 18:59:29.857384  152077 provision.go:87] duration metric: took 251.042624ms to configureAuth
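configureAuth regenerates the machine's server certificate so its SANs cover 127.0.0.1, the guest IP, localhost, minikube and the machine name (with org jenkins.<machine>), then copies ca.pem, server.pem and server-key.pem into /etc/docker. A compact crypto/x509 sketch of issuing such a certificate; the key size, validity period and function signature are assumptions.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"time"
)

// newServerCert issues a server certificate with the SANs listed in the log,
// signed by caCert/caKey. Error handling is deliberately minimal.
func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, ip net.IP, name string) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins." + name}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", name},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), ip},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	return certPEM, key, nil
}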
	I0729 18:59:29.857416  152077 buildroot.go:189] setting minikube options for container-runtime
	I0729 18:59:29.857640  152077 config.go:182] Loaded profile config "old-k8s-version-834964": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 18:59:29.857738  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHHostname
	I0729 18:59:29.860583  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:29.860937  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:59", ip: ""} in network mk-old-k8s-version-834964: {Iface:virbr1 ExpiryTime:2024-07-29 19:59:21 +0000 UTC Type:0 Mac:52:54:00:60:d4:59 Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:old-k8s-version-834964 Clientid:01:52:54:00:60:d4:59}
	I0729 18:59:29.860961  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined IP address 192.168.61.89 and MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:29.861218  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHPort
	I0729 18:59:29.861424  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHKeyPath
	I0729 18:59:29.861551  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHKeyPath
	I0729 18:59:29.861714  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHUsername
	I0729 18:59:29.861845  152077 main.go:141] libmachine: Using SSH client type: native
	I0729 18:59:29.862041  152077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.89 22 <nil> <nil>}
	I0729 18:59:29.862061  152077 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 18:59:30.113352  152077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 18:59:30.113379  152077 machine.go:97] duration metric: took 836.410672ms to provisionDockerMachine
	I0729 18:59:30.113393  152077 start.go:293] postStartSetup for "old-k8s-version-834964" (driver="kvm2")
	I0729 18:59:30.113406  152077 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 18:59:30.113427  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .DriverName
	I0729 18:59:30.113736  152077 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 18:59:30.113767  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHHostname
	I0729 18:59:30.116368  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:30.116721  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:59", ip: ""} in network mk-old-k8s-version-834964: {Iface:virbr1 ExpiryTime:2024-07-29 19:59:21 +0000 UTC Type:0 Mac:52:54:00:60:d4:59 Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:old-k8s-version-834964 Clientid:01:52:54:00:60:d4:59}
	I0729 18:59:30.116747  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined IP address 192.168.61.89 and MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:30.116952  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHPort
	I0729 18:59:30.117148  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHKeyPath
	I0729 18:59:30.117308  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHUsername
	I0729 18:59:30.117414  152077 sshutil.go:53] new ssh client: &{IP:192.168.61.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/old-k8s-version-834964/id_rsa Username:docker}
	I0729 18:59:30.195069  152077 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 18:59:30.199201  152077 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 18:59:30.199219  152077 filesync.go:126] Scanning /home/jenkins/minikube-integration/19339-88081/.minikube/addons for local assets ...
	I0729 18:59:30.199279  152077 filesync.go:126] Scanning /home/jenkins/minikube-integration/19339-88081/.minikube/files for local assets ...
	I0729 18:59:30.199374  152077 filesync.go:149] local asset: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem -> 952822.pem in /etc/ssl/certs
	I0729 18:59:30.199479  152077 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 18:59:30.208616  152077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem --> /etc/ssl/certs/952822.pem (1708 bytes)
	I0729 18:59:30.234943  152077 start.go:296] duration metric: took 121.530806ms for postStartSetup
	I0729 18:59:30.234985  152077 fix.go:56] duration metric: took 19.509472409s for fixHost
	I0729 18:59:30.235004  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHHostname
	I0729 18:59:30.237789  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:30.238195  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:59", ip: ""} in network mk-old-k8s-version-834964: {Iface:virbr1 ExpiryTime:2024-07-29 19:59:21 +0000 UTC Type:0 Mac:52:54:00:60:d4:59 Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:old-k8s-version-834964 Clientid:01:52:54:00:60:d4:59}
	I0729 18:59:30.238226  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined IP address 192.168.61.89 and MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:30.238369  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHPort
	I0729 18:59:30.238535  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHKeyPath
	I0729 18:59:30.238701  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHKeyPath
	I0729 18:59:30.238892  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHUsername
	I0729 18:59:30.239065  152077 main.go:141] libmachine: Using SSH client type: native
	I0729 18:59:30.239288  152077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.89 22 <nil> <nil>}
	I0729 18:59:30.239302  152077 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 18:59:30.342059  152077 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722279570.312960806
	
	I0729 18:59:30.342084  152077 fix.go:216] guest clock: 1722279570.312960806
	I0729 18:59:30.342092  152077 fix.go:229] Guest: 2024-07-29 18:59:30.312960806 +0000 UTC Remote: 2024-07-29 18:59:30.234988552 +0000 UTC m=+230.685193458 (delta=77.972254ms)
	I0729 18:59:30.342134  152077 fix.go:200] guest clock delta is within tolerance: 77.972254ms
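The guest-clock check runs date +%s.%N inside the VM (the %!s(MISSING).%!N(MISSING) above appears to be that format string as rendered by the logger) and compares the result with the host clock, accepting a small delta such as the 77ms seen here. A sketch of that comparison; the ssh arguments and the one-second tolerance are assumptions, since the log does not state the actual tolerance.

package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
	"time"
)

// guestClockDelta runs `date +%s.%N` over ssh (sshArgs supplies the connection
// options) and returns how far the guest clock is from the host clock.
func guestClockDelta(sshArgs []string) (time.Duration, error) {
	out, err := exec.Command("ssh", append(sshArgs, "date +%s.%N")...).Output()
	if err != nil {
		return 0, err
	}
	secs, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	return delta, nil
}

func main() {
	const tolerance = time.Second // assumed; the log only shows 77ms being accepted
	delta, err := guestClockDelta([]string{
		"-i", "/home/jenkins/minikube-integration/19339-88081/.minikube/machines/old-k8s-version-834964/id_rsa",
		"docker@192.168.61.89",
	})
	if err != nil {
		fmt.Println("clock check failed:", err)
		return
	}
	fmt.Printf("guest clock delta %s (tolerance %s)\n", delta, tolerance)
}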
	I0729 18:59:30.342145  152077 start.go:83] releasing machines lock for "old-k8s-version-834964", held for 19.616668039s
	I0729 18:59:30.342179  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .DriverName
	I0729 18:59:30.342502  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetIP
	I0729 18:59:30.345489  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:30.345885  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:59", ip: ""} in network mk-old-k8s-version-834964: {Iface:virbr1 ExpiryTime:2024-07-29 19:59:21 +0000 UTC Type:0 Mac:52:54:00:60:d4:59 Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:old-k8s-version-834964 Clientid:01:52:54:00:60:d4:59}
	I0729 18:59:30.345917  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined IP address 192.168.61.89 and MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:30.346038  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .DriverName
	I0729 18:59:30.346564  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .DriverName
	I0729 18:59:30.346761  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .DriverName
	I0729 18:59:30.346848  152077 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 18:59:30.346899  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHHostname
	I0729 18:59:30.347008  152077 ssh_runner.go:195] Run: cat /version.json
	I0729 18:59:30.347035  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHHostname
	I0729 18:59:30.349621  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:30.349978  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:30.350056  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:59", ip: ""} in network mk-old-k8s-version-834964: {Iface:virbr1 ExpiryTime:2024-07-29 19:59:21 +0000 UTC Type:0 Mac:52:54:00:60:d4:59 Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:old-k8s-version-834964 Clientid:01:52:54:00:60:d4:59}
	I0729 18:59:30.350080  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined IP address 192.168.61.89 and MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:30.350214  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHPort
	I0729 18:59:30.350385  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHKeyPath
	I0729 18:59:30.350466  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:59", ip: ""} in network mk-old-k8s-version-834964: {Iface:virbr1 ExpiryTime:2024-07-29 19:59:21 +0000 UTC Type:0 Mac:52:54:00:60:d4:59 Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:old-k8s-version-834964 Clientid:01:52:54:00:60:d4:59}
	I0729 18:59:30.350488  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined IP address 192.168.61.89 and MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:30.350563  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHUsername
	I0729 18:59:30.350625  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHPort
	I0729 18:59:30.350737  152077 sshutil.go:53] new ssh client: &{IP:192.168.61.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/old-k8s-version-834964/id_rsa Username:docker}
	I0729 18:59:30.350811  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHKeyPath
	I0729 18:59:30.350955  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHUsername
	I0729 18:59:30.351110  152077 sshutil.go:53] new ssh client: &{IP:192.168.61.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/old-k8s-version-834964/id_rsa Username:docker}
	I0729 18:59:30.458405  152077 ssh_runner.go:195] Run: systemctl --version
	I0729 18:59:30.465636  152077 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 18:59:30.614302  152077 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 18:59:30.621254  152077 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 18:59:30.621341  152077 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 18:59:30.639929  152077 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 18:59:30.639951  152077 start.go:495] detecting cgroup driver to use...
	I0729 18:59:30.640014  152077 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 18:59:30.660286  152077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 18:59:30.680212  152077 docker.go:217] disabling cri-docker service (if available) ...
	I0729 18:59:30.680287  152077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 18:59:30.700782  152077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 18:59:30.722050  152077 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 18:59:30.848624  152077 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 18:59:31.014541  152077 docker.go:233] disabling docker service ...
	I0729 18:59:31.014633  152077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 18:59:31.030560  152077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 18:59:31.043240  152077 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 18:59:31.182489  152077 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 18:59:31.338661  152077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 18:59:31.353489  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 18:59:31.372958  152077 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0729 18:59:31.373031  152077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:59:31.384674  152077 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 18:59:31.384743  152077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:59:31.397732  152077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:59:31.408481  152077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:59:31.418983  152077 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
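Configuring cri-o for this profile is a short series of in-place sed edits to /etc/crio/crio.conf.d/02-crio.conf: point pause_image at registry.k8s.io/pause:3.2, switch cgroup_manager to cgroupfs, and re-add conmon_cgroup = "pod". A sketch that issues the same commands, with run again standing in for the ssh_runner rather than any real minikube helper.

package main

// configureCRIO applies the same sed edits the log shows against
// /etc/crio/crio.conf.d/02-crio.conf.
func configureCRIO(run func(cmd string) error, pauseImage string) error {
	cmds := []string{
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "` + pauseImage + `"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo rm -rf /etc/cni/net.mk`,
	}
	for _, c := range cmds {
		if err := run(c); err != nil {
			return err
		}
	}
	return nil
}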
	I0729 18:59:31.430095  152077 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 18:59:31.440316  152077 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 18:59:31.440376  152077 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 18:59:31.454369  152077 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
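The netfilter step is a probe-then-fallback: try the bridge-nf-call-iptables sysctl, and when the key is missing (the status 255 above) load br_netfilter, then make sure IPv4 forwarding is enabled. The same sequence as a sketch, again with run standing in for the ssh_runner.

package main

// ensureNetfilter mirrors the probe/fallback in the log: if the bridge sysctl
// is absent, load br_netfilter, then enable IPv4 forwarding either way.
func ensureNetfilter(run func(cmd string) error) error {
	if err := run("sudo sysctl net.bridge.bridge-nf-call-iptables"); err != nil {
		// The sysctl key does not exist yet; loading the module creates it.
		if err := run("sudo modprobe br_netfilter"); err != nil {
			return err
		}
	}
	return run(`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`)
}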
	I0729 18:59:31.464109  152077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:59:31.602010  152077 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 18:59:31.776788  152077 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 18:59:31.776884  152077 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 18:59:31.783376  152077 start.go:563] Will wait 60s for crictl version
	I0729 18:59:31.783440  152077 ssh_runner.go:195] Run: which crictl
	I0729 18:59:31.788335  152077 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 18:59:31.835043  152077 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
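After restarting cri-o, the start code waits up to 60s for /var/run/crio/crio.sock to exist and then up to 60s for crictl version to answer. A sketch of that double wait; the one-second poll interval is an assumption.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForCRIO waits for the cri-o socket, then for crictl to report a version.
func waitForCRIO(socketPath string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// Stat the socket first, like the `stat /var/run/crio/crio.sock` run above.
		if exec.Command("stat", socketPath).Run() == nil {
			if out, err := exec.Command("sudo", "crictl", "version").Output(); err == nil {
				fmt.Printf("%s", out)
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("cri-o not ready within %s", timeout)
}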
	I0729 18:59:31.835137  152077 ssh_runner.go:195] Run: crio --version
	I0729 18:59:31.867407  152077 ssh_runner.go:195] Run: crio --version
	I0729 18:59:31.906757  152077 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0729 18:59:30.370331  151436 main.go:141] libmachine: (no-preload-524369) Calling .Start
	I0729 18:59:30.370476  151436 main.go:141] libmachine: (no-preload-524369) Ensuring networks are active...
	I0729 18:59:30.371302  151436 main.go:141] libmachine: (no-preload-524369) Ensuring network default is active
	I0729 18:59:30.371773  151436 main.go:141] libmachine: (no-preload-524369) Ensuring network mk-no-preload-524369 is active
	I0729 18:59:30.372226  151436 main.go:141] libmachine: (no-preload-524369) Getting domain xml...
	I0729 18:59:30.373463  151436 main.go:141] libmachine: (no-preload-524369) Creating domain...
	I0729 18:59:30.758836  151436 main.go:141] libmachine: (no-preload-524369) Waiting to get IP...
	I0729 18:59:30.760128  151436 main.go:141] libmachine: (no-preload-524369) DBG | domain no-preload-524369 has defined MAC address 52:54:00:16:73:ec in network mk-no-preload-524369
	I0729 18:59:30.760719  151436 main.go:141] libmachine: (no-preload-524369) DBG | unable to find current IP address of domain no-preload-524369 in network mk-no-preload-524369
	I0729 18:59:30.760917  151436 main.go:141] libmachine: (no-preload-524369) DBG | I0729 18:59:30.760783  153058 retry.go:31] will retry after 240.596072ms: waiting for machine to come up
	I0729 18:59:31.003356  151436 main.go:141] libmachine: (no-preload-524369) DBG | domain no-preload-524369 has defined MAC address 52:54:00:16:73:ec in network mk-no-preload-524369
	I0729 18:59:31.003892  151436 main.go:141] libmachine: (no-preload-524369) DBG | unable to find current IP address of domain no-preload-524369 in network mk-no-preload-524369
	I0729 18:59:31.003912  151436 main.go:141] libmachine: (no-preload-524369) DBG | I0729 18:59:31.003850  153058 retry.go:31] will retry after 337.273298ms: waiting for machine to come up
	I0729 18:59:31.342536  151436 main.go:141] libmachine: (no-preload-524369) DBG | domain no-preload-524369 has defined MAC address 52:54:00:16:73:ec in network mk-no-preload-524369
	I0729 18:59:31.343291  151436 main.go:141] libmachine: (no-preload-524369) DBG | unable to find current IP address of domain no-preload-524369 in network mk-no-preload-524369
	I0729 18:59:31.343326  151436 main.go:141] libmachine: (no-preload-524369) DBG | I0729 18:59:31.343217  153058 retry.go:31] will retry after 359.765388ms: waiting for machine to come up
	I0729 18:59:31.704947  151436 main.go:141] libmachine: (no-preload-524369) DBG | domain no-preload-524369 has defined MAC address 52:54:00:16:73:ec in network mk-no-preload-524369
	I0729 18:59:31.705525  151436 main.go:141] libmachine: (no-preload-524369) DBG | unable to find current IP address of domain no-preload-524369 in network mk-no-preload-524369
	I0729 18:59:31.705552  151436 main.go:141] libmachine: (no-preload-524369) DBG | I0729 18:59:31.705478  153058 retry.go:31] will retry after 485.778406ms: waiting for machine to come up
	I0729 18:59:32.193264  151436 main.go:141] libmachine: (no-preload-524369) DBG | domain no-preload-524369 has defined MAC address 52:54:00:16:73:ec in network mk-no-preload-524369
	I0729 18:59:32.193848  151436 main.go:141] libmachine: (no-preload-524369) DBG | unable to find current IP address of domain no-preload-524369 in network mk-no-preload-524369
	I0729 18:59:32.193874  151436 main.go:141] libmachine: (no-preload-524369) DBG | I0729 18:59:32.193798  153058 retry.go:31] will retry after 594.395961ms: waiting for machine to come up
	I0729 18:59:32.789969  151436 main.go:141] libmachine: (no-preload-524369) DBG | domain no-preload-524369 has defined MAC address 52:54:00:16:73:ec in network mk-no-preload-524369
	I0729 18:59:32.790602  151436 main.go:141] libmachine: (no-preload-524369) DBG | unable to find current IP address of domain no-preload-524369 in network mk-no-preload-524369
	I0729 18:59:32.790636  151436 main.go:141] libmachine: (no-preload-524369) DBG | I0729 18:59:32.790566  153058 retry.go:31] will retry after 676.18566ms: waiting for machine to come up
	I0729 18:59:33.468354  151436 main.go:141] libmachine: (no-preload-524369) DBG | domain no-preload-524369 has defined MAC address 52:54:00:16:73:ec in network mk-no-preload-524369
	I0729 18:59:33.468995  151436 main.go:141] libmachine: (no-preload-524369) DBG | unable to find current IP address of domain no-preload-524369 in network mk-no-preload-524369
	I0729 18:59:33.469018  151436 main.go:141] libmachine: (no-preload-524369) DBG | I0729 18:59:33.468899  153058 retry.go:31] will retry after 931.124971ms: waiting for machine to come up
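The libmachine lines above poll the KVM network for the domain's DHCP lease, sleeping a little longer after each miss. A rough sketch of such a retry loop with growing delays; the lookup callback, the growth factor, and the waitForIP name are stand-ins rather than the actual retry.go implementation:

package main

import (
	"errors"
	"log"
	"time"
)

// waitForIP retries lookup with growing delays, echoing the
// "will retry after ...ms: waiting for machine to come up" lines above.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 240 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		log.Printf("will retry after %s: waiting for machine to come up", delay)
		time.Sleep(delay)
		delay = delay * 3 / 2 // grow the interval between attempts
	}
	return "", errors.New("timed out waiting for machine IP")
}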
	I0729 18:59:31.908229  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetIP
	I0729 18:59:31.911323  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:31.911752  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:59", ip: ""} in network mk-old-k8s-version-834964: {Iface:virbr1 ExpiryTime:2024-07-29 19:59:21 +0000 UTC Type:0 Mac:52:54:00:60:d4:59 Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:old-k8s-version-834964 Clientid:01:52:54:00:60:d4:59}
	I0729 18:59:31.911788  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined IP address 192.168.61.89 and MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:31.912046  152077 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0729 18:59:31.916244  152077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
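The /etc/hosts edit above removes any stale host.minikube.internal line and appends a fresh one via a grep -v / echo pipeline run over SSH. A simplified Go equivalent of that rewrite (upsertHostsEntry is a hypothetical helper; it skips the sudo and temp-file handling in the real command):

package main

import "strings"

// upsertHostsEntry rewrites /etc/hosts-style content so that exactly one
// line maps name to ip, the way the grep -v + echo one-liner in the log does.
func upsertHostsEntry(contents, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(contents, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop any stale mapping for this name, as grep -v does
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}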
	I0729 18:59:31.932961  152077 kubeadm.go:883] updating cluster {Name:old-k8s-version-834964 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-834964 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.89 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 18:59:31.933091  152077 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 18:59:31.933152  152077 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:59:31.994345  152077 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 18:59:31.994433  152077 ssh_runner.go:195] Run: which lz4
	I0729 18:59:31.999099  152077 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 18:59:32.003996  152077 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 18:59:32.004036  152077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0729 18:59:33.668954  152077 crio.go:462] duration metric: took 1.669904838s to copy over tarball
	I0729 18:59:33.669039  152077 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
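The preload path above copies a ~473 MB .tar.lz4 to the guest and unpacks it with tar -I lz4 into /var. A sketch of running that same extraction locally instead of through ssh_runner (extractPreload is a made-up helper; the tar flags and paths are taken from the log):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

// extractPreload unpacks a preloaded image tarball the same way the log does
// ("tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf ...").
func extractPreload(tarball, destDir string) error {
	start := time.Now()
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", destDir, "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("tar failed: %v: %s", err, out)
	}
	log.Printf("duration metric: took %s to extract the tarball", time.Since(start))
	return nil
}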
	I0729 18:59:32.939288  151772 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-612270" in "kube-system" namespace has status "Ready":"False"
	I0729 18:59:35.438858  151772 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-612270" in "kube-system" namespace has status "Ready":"True"
	I0729 18:59:35.438890  151772 pod_ready.go:81] duration metric: took 9.007198875s for pod "kube-scheduler-default-k8s-diff-port-612270" in "kube-system" namespace to be "Ready" ...
	I0729 18:59:35.438902  151772 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-shvq4" in "kube-system" namespace to be "Ready" ...
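The pod_ready lines above poll until the pod's Ready condition turns True. A sketch of that check with client-go, assuming a client-go release where Get takes a context; podIsReady is a hypothetical helper, not the test harness's own pod_ready.go:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podIsReady reports whether the named pod has condition Ready=True,
// which is what the pod_ready.go polling above waits for.
func podIsReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}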
	I0729 18:59:34.402166  151436 main.go:141] libmachine: (no-preload-524369) DBG | domain no-preload-524369 has defined MAC address 52:54:00:16:73:ec in network mk-no-preload-524369
	I0729 18:59:34.402811  151436 main.go:141] libmachine: (no-preload-524369) DBG | unable to find current IP address of domain no-preload-524369 in network mk-no-preload-524369
	I0729 18:59:34.402845  151436 main.go:141] libmachine: (no-preload-524369) DBG | I0729 18:59:34.402746  153058 retry.go:31] will retry after 1.41475957s: waiting for machine to come up
	I0729 18:59:35.819011  151436 main.go:141] libmachine: (no-preload-524369) DBG | domain no-preload-524369 has defined MAC address 52:54:00:16:73:ec in network mk-no-preload-524369
	I0729 18:59:35.819545  151436 main.go:141] libmachine: (no-preload-524369) DBG | unable to find current IP address of domain no-preload-524369 in network mk-no-preload-524369
	I0729 18:59:35.819647  151436 main.go:141] libmachine: (no-preload-524369) DBG | I0729 18:59:35.819552  153058 retry.go:31] will retry after 1.570889349s: waiting for machine to come up
	I0729 18:59:37.391646  151436 main.go:141] libmachine: (no-preload-524369) DBG | domain no-preload-524369 has defined MAC address 52:54:00:16:73:ec in network mk-no-preload-524369
	I0729 18:59:37.392144  151436 main.go:141] libmachine: (no-preload-524369) DBG | unable to find current IP address of domain no-preload-524369 in network mk-no-preload-524369
	I0729 18:59:37.392172  151436 main.go:141] libmachine: (no-preload-524369) DBG | I0729 18:59:37.392111  153058 retry.go:31] will retry after 2.02996597s: waiting for machine to come up
	I0729 18:59:36.583975  152077 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.914883435s)
	I0729 18:59:36.584005  152077 crio.go:469] duration metric: took 2.915018011s to extract the tarball
	I0729 18:59:36.584016  152077 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 18:59:36.631515  152077 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:59:36.667867  152077 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 18:59:36.667896  152077 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 18:59:36.667964  152077 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:59:36.668006  152077 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 18:59:36.668011  152077 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 18:59:36.668026  152077 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0729 18:59:36.667965  152077 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 18:59:36.668009  152077 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0729 18:59:36.668080  152077 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0729 18:59:36.668040  152077 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 18:59:36.669854  152077 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0729 18:59:36.669863  152077 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0729 18:59:36.670066  152077 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 18:59:36.670165  152077 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 18:59:36.670165  152077 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0729 18:59:36.670221  152077 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:59:36.670243  152077 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 18:59:36.670165  152077 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 18:59:36.840898  152077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0729 18:59:36.843825  152077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0729 18:59:36.851242  152077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0729 18:59:36.856440  152077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0729 18:59:36.868504  152077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 18:59:36.889795  152077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0729 18:59:36.897786  152077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0729 18:59:36.948872  152077 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0729 18:59:36.948919  152077 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0729 18:59:36.948933  152077 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 18:59:36.948953  152077 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0729 18:59:36.948993  152077 ssh_runner.go:195] Run: which crictl
	I0729 18:59:36.948993  152077 ssh_runner.go:195] Run: which crictl
	I0729 18:59:36.982981  152077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:59:36.983833  152077 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0729 18:59:36.983868  152077 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0729 18:59:36.983903  152077 ssh_runner.go:195] Run: which crictl
	I0729 18:59:37.051531  152077 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0729 18:59:37.051573  152077 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 18:59:37.051626  152077 ssh_runner.go:195] Run: which crictl
	I0729 18:59:37.052794  152077 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0729 18:59:37.052836  152077 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 18:59:37.052894  152077 ssh_runner.go:195] Run: which crictl
	I0729 18:59:37.052891  152077 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0729 18:59:37.052972  152077 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0729 18:59:37.052994  152077 ssh_runner.go:195] Run: which crictl
	I0729 18:59:37.055958  152077 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0729 18:59:37.055993  152077 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 18:59:37.056027  152077 ssh_runner.go:195] Run: which crictl
	I0729 18:59:37.056053  152077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 18:59:37.056102  152077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 18:59:37.207598  152077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 18:59:37.207636  152077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 18:59:37.207647  152077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 18:59:37.207700  152077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 18:59:37.207790  152077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 18:59:37.207816  152077 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0729 18:59:37.207918  152077 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0729 18:59:37.321353  152077 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0729 18:59:37.323936  152077 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0729 18:59:37.330697  152077 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0729 18:59:37.330788  152077 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0729 18:59:37.330848  152077 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0729 18:59:37.330901  152077 cache_images.go:92] duration metric: took 662.990743ms to LoadCachedImages
	W0729 18:59:37.330994  152077 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19339-88081/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
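The image-cache pass above inspects each image in the runtime, compares its ID against the expected hash, and falls back to the on-disk cache when the ID does not match (or the image is absent). A small sketch of that probe via podman, as the log runs it; imageMatches is a made-up name:

package main

import (
	"bytes"
	"os/exec"
)

// imageMatches checks whether the runtime already has ref at the expected
// image ID, mirroring the "podman image inspect --format {{.Id}}" probes
// above; callers fall back to the cached tarball when it returns false.
func imageMatches(ref, wantID string) bool {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", ref).Output()
	if err != nil {
		return false // image not present in the container runtime
	}
	return bytes.Equal(bytes.TrimSpace(out), []byte(wantID))
}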
	I0729 18:59:37.331012  152077 kubeadm.go:934] updating node { 192.168.61.89 8443 v1.20.0 crio true true} ...
	I0729 18:59:37.331174  152077 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-834964 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.89
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-834964 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 18:59:37.331244  152077 ssh_runner.go:195] Run: crio config
	I0729 18:59:37.379781  152077 cni.go:84] Creating CNI manager for ""
	I0729 18:59:37.379805  152077 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:59:37.379821  152077 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 18:59:37.379849  152077 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.89 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-834964 NodeName:old-k8s-version-834964 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.89"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.89 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0729 18:59:37.380041  152077 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.89
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-834964"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.89
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.89"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 18:59:37.380121  152077 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0729 18:59:37.390185  152077 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 18:59:37.390247  152077 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 18:59:37.401455  152077 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0729 18:59:37.419736  152077 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 18:59:37.438017  152077 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0729 18:59:37.457881  152077 ssh_runner.go:195] Run: grep 192.168.61.89	control-plane.minikube.internal$ /etc/hosts
	I0729 18:59:37.461878  152077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.89	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:59:37.475477  152077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:59:37.601386  152077 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:59:37.630282  152077 certs.go:68] Setting up /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/old-k8s-version-834964 for IP: 192.168.61.89
	I0729 18:59:37.630309  152077 certs.go:194] generating shared ca certs ...
	I0729 18:59:37.630331  152077 certs.go:226] acquiring lock for ca certs: {Name:mk20106d58b3f22ea0fc0f4b499fa3fa572a2690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:59:37.630517  152077 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key
	I0729 18:59:37.630574  152077 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key
	I0729 18:59:37.630587  152077 certs.go:256] generating profile certs ...
	I0729 18:59:37.630717  152077 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/old-k8s-version-834964/client.key
	I0729 18:59:37.630789  152077 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/old-k8s-version-834964/apiserver.key.34fbf854
	I0729 18:59:37.630855  152077 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/old-k8s-version-834964/proxy-client.key
	I0729 18:59:37.630995  152077 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282.pem (1338 bytes)
	W0729 18:59:37.631039  152077 certs.go:480] ignoring /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282_empty.pem, impossibly tiny 0 bytes
	I0729 18:59:37.631049  152077 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 18:59:37.631077  152077 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem (1078 bytes)
	I0729 18:59:37.631109  152077 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem (1123 bytes)
	I0729 18:59:37.631141  152077 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem (1679 bytes)
	I0729 18:59:37.631179  152077 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem (1708 bytes)
	I0729 18:59:37.631894  152077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 18:59:37.670793  152077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 18:59:37.698962  152077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 18:59:37.723732  152077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 18:59:37.752005  152077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/old-k8s-version-834964/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 18:59:37.791334  152077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/old-k8s-version-834964/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 18:59:37.830038  152077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/old-k8s-version-834964/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 18:59:37.860764  152077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/old-k8s-version-834964/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 18:59:37.900015  152077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 18:59:37.924659  152077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282.pem --> /usr/share/ca-certificates/95282.pem (1338 bytes)
	I0729 18:59:37.950049  152077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem --> /usr/share/ca-certificates/952822.pem (1708 bytes)
	I0729 18:59:37.974698  152077 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 18:59:37.991903  152077 ssh_runner.go:195] Run: openssl version
	I0729 18:59:37.997823  152077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 18:59:38.009021  152077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:59:38.013905  152077 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:59:38.014034  152077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:59:38.020663  152077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 18:59:38.032489  152077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/95282.pem && ln -fs /usr/share/ca-certificates/95282.pem /etc/ssl/certs/95282.pem"
	I0729 18:59:38.043992  152077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/95282.pem
	I0729 18:59:38.050676  152077 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:45 /usr/share/ca-certificates/95282.pem
	I0729 18:59:38.050753  152077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/95282.pem
	I0729 18:59:38.056989  152077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/95282.pem /etc/ssl/certs/51391683.0"
	I0729 18:59:38.068418  152077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/952822.pem && ln -fs /usr/share/ca-certificates/952822.pem /etc/ssl/certs/952822.pem"
	I0729 18:59:38.080303  152077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/952822.pem
	I0729 18:59:38.085665  152077 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:45 /usr/share/ca-certificates/952822.pem
	I0729 18:59:38.085736  152077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/952822.pem
	I0729 18:59:38.091430  152077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/952822.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 18:59:38.105136  152077 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 18:59:38.109647  152077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 18:59:38.115807  152077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 18:59:38.121672  152077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 18:59:38.128080  152077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 18:59:38.134195  152077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 18:59:38.140190  152077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
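The openssl -checkend 86400 calls above ask whether each control-plane certificate expires within the next day. The same check can be sketched with crypto/x509 (certExpiresWithin is a hypothetical helper, not part of minikube):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"os"
	"time"
)

// certExpiresWithin is the crypto/x509 analogue of
// "openssl x509 -noout -in <cert> -checkend 86400": it reports whether
// the certificate at path expires within d.
func certExpiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}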
	I0729 18:59:38.146051  152077 kubeadm.go:392] StartCluster: {Name:old-k8s-version-834964 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-834964 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.89 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:59:38.146162  152077 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 18:59:38.146213  152077 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:59:38.182889  152077 cri.go:89] found id: ""
	I0729 18:59:38.182989  152077 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 18:59:38.193169  152077 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 18:59:38.193191  152077 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 18:59:38.193252  152077 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 18:59:38.202493  152077 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 18:59:38.203291  152077 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-834964" does not appear in /home/jenkins/minikube-integration/19339-88081/kubeconfig
	I0729 18:59:38.203782  152077 kubeconfig.go:62] /home/jenkins/minikube-integration/19339-88081/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-834964" cluster setting kubeconfig missing "old-k8s-version-834964" context setting]
	I0729 18:59:38.204438  152077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/kubeconfig: {Name:mk6702c0db404329102489655bdd2ff03ad6e919 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:59:38.230408  152077 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 18:59:38.243228  152077 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.89
	I0729 18:59:38.243262  152077 kubeadm.go:1160] stopping kube-system containers ...
	I0729 18:59:38.243276  152077 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 18:59:38.243335  152077 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:59:38.279296  152077 cri.go:89] found id: ""
	I0729 18:59:38.279380  152077 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 18:59:38.296415  152077 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:59:38.308152  152077 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:59:38.308174  152077 kubeadm.go:157] found existing configuration files:
	
	I0729 18:59:38.308225  152077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 18:59:38.317135  152077 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:59:38.317194  152077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:59:38.326564  152077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 18:59:38.336270  152077 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:59:38.336337  152077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:59:38.345342  152077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 18:59:38.354548  152077 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:59:38.354605  152077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:59:38.364166  152077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 18:59:38.373484  152077 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:59:38.373533  152077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 18:59:38.383259  152077 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 18:59:38.393125  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:59:38.532442  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:59:39.309448  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:59:39.560692  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
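The restart path above replays kubeadm init one phase at a time against the generated /var/tmp/minikube/kubeadm.yaml, with PATH pointed at the pinned v1.20.0 binaries. A sketch of driving that sequence from Go; runInitPhases and the exact env handling are assumptions, not minikube's helper:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// runInitPhases replays the kubeadm phase sequence from the log
// (certs, kubeconfig, kubelet-start, control-plane, etcd) against a
// config file, prefixing PATH so the pinned binaries are used.
func runInitPhases(binDir, config string) error {
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append([]string{"env", "PATH=" + binDir + ":" + os.Getenv("PATH"),
			"kubeadm", "init", "phase"}, p...)
		args = append(args, "--config", config)
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("phase %v failed: %v: %s", p, err, out)
		}
	}
	return nil
}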
	I0729 18:59:38.234589  151772 pod_ready.go:102] pod "metrics-server-569cc877fc-shvq4" in "kube-system" namespace has status "Ready":"False"
	I0729 18:59:40.445823  151772 pod_ready.go:102] pod "metrics-server-569cc877fc-shvq4" in "kube-system" namespace has status "Ready":"False"
	I0729 18:59:39.423555  151436 main.go:141] libmachine: (no-preload-524369) DBG | domain no-preload-524369 has defined MAC address 52:54:00:16:73:ec in network mk-no-preload-524369
	I0729 18:59:39.424165  151436 main.go:141] libmachine: (no-preload-524369) DBG | unable to find current IP address of domain no-preload-524369 in network mk-no-preload-524369
	I0729 18:59:39.424198  151436 main.go:141] libmachine: (no-preload-524369) DBG | I0729 18:59:39.424098  153058 retry.go:31] will retry after 2.362676466s: waiting for machine to come up
	I0729 18:59:41.788917  151436 main.go:141] libmachine: (no-preload-524369) DBG | domain no-preload-524369 has defined MAC address 52:54:00:16:73:ec in network mk-no-preload-524369
	I0729 18:59:41.789485  151436 main.go:141] libmachine: (no-preload-524369) DBG | unable to find current IP address of domain no-preload-524369 in network mk-no-preload-524369
	I0729 18:59:41.789521  151436 main.go:141] libmachine: (no-preload-524369) DBG | I0729 18:59:41.789381  153058 retry.go:31] will retry after 2.3795803s: waiting for machine to come up
	I0729 18:59:39.677689  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:59:39.773200  152077 api_server.go:52] waiting for apiserver process to appear ...
	I0729 18:59:39.773302  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:40.273962  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:40.773384  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:41.274085  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:41.773667  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:42.273638  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:42.774096  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:43.273549  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:43.773652  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:44.274085  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
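The half-second pgrep loop above waits for the kube-apiserver process to appear after the control-plane phase. A sketch of the same wait (apiserverPID is a made-up name; the pgrep pattern and the roughly 500ms cadence are copied from the log):

package main

import (
	"os/exec"
	"strings"
	"time"
)

// apiserverPID polls "sudo pgrep -xnf kube-apiserver.*minikube.*" until a
// PID appears or the timeout is spent, like the loop in the log above.
func apiserverPID(timeout time.Duration) (string, bool) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf",
			"kube-apiserver.*minikube.*").Output()
		if err == nil && len(out) > 0 {
			return strings.TrimSpace(string(out)), true
		}
		time.Sleep(500 * time.Millisecond)
	}
	return "", false
}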
	I0729 18:59:42.945366  151772 pod_ready.go:102] pod "metrics-server-569cc877fc-shvq4" in "kube-system" namespace has status "Ready":"False"
	I0729 18:59:44.945732  151772 pod_ready.go:102] pod "metrics-server-569cc877fc-shvq4" in "kube-system" namespace has status "Ready":"False"
	I0729 18:59:44.171807  151436 main.go:141] libmachine: (no-preload-524369) DBG | domain no-preload-524369 has defined MAC address 52:54:00:16:73:ec in network mk-no-preload-524369
	I0729 18:59:44.172238  151436 main.go:141] libmachine: (no-preload-524369) DBG | unable to find current IP address of domain no-preload-524369 in network mk-no-preload-524369
	I0729 18:59:44.172265  151436 main.go:141] libmachine: (no-preload-524369) DBG | I0729 18:59:44.172193  153058 retry.go:31] will retry after 3.861769177s: waiting for machine to come up
	I0729 18:59:48.037502  151436 main.go:141] libmachine: (no-preload-524369) DBG | domain no-preload-524369 has defined MAC address 52:54:00:16:73:ec in network mk-no-preload-524369
	I0729 18:59:48.038021  151436 main.go:141] libmachine: (no-preload-524369) Found IP for machine: 192.168.72.7
	I0729 18:59:48.038047  151436 main.go:141] libmachine: (no-preload-524369) Reserving static IP address...
	I0729 18:59:48.038062  151436 main.go:141] libmachine: (no-preload-524369) DBG | domain no-preload-524369 has current primary IP address 192.168.72.7 and MAC address 52:54:00:16:73:ec in network mk-no-preload-524369
	I0729 18:59:48.038466  151436 main.go:141] libmachine: (no-preload-524369) DBG | found host DHCP lease matching {name: "no-preload-524369", mac: "52:54:00:16:73:ec", ip: "192.168.72.7"} in network mk-no-preload-524369: {Iface:virbr2 ExpiryTime:2024-07-29 19:59:41 +0000 UTC Type:0 Mac:52:54:00:16:73:ec Iaid: IPaddr:192.168.72.7 Prefix:24 Hostname:no-preload-524369 Clientid:01:52:54:00:16:73:ec}
	I0729 18:59:48.038492  151436 main.go:141] libmachine: (no-preload-524369) Reserved static IP address: 192.168.72.7
	I0729 18:59:48.038513  151436 main.go:141] libmachine: (no-preload-524369) DBG | skip adding static IP to network mk-no-preload-524369 - found existing host DHCP lease matching {name: "no-preload-524369", mac: "52:54:00:16:73:ec", ip: "192.168.72.7"}
	I0729 18:59:48.038531  151436 main.go:141] libmachine: (no-preload-524369) DBG | Getting to WaitForSSH function...
	I0729 18:59:48.038543  151436 main.go:141] libmachine: (no-preload-524369) Waiting for SSH to be available...
	I0729 18:59:48.040594  151436 main.go:141] libmachine: (no-preload-524369) DBG | domain no-preload-524369 has defined MAC address 52:54:00:16:73:ec in network mk-no-preload-524369
	I0729 18:59:48.040996  151436 main.go:141] libmachine: (no-preload-524369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:73:ec", ip: ""} in network mk-no-preload-524369: {Iface:virbr2 ExpiryTime:2024-07-29 19:59:41 +0000 UTC Type:0 Mac:52:54:00:16:73:ec Iaid: IPaddr:192.168.72.7 Prefix:24 Hostname:no-preload-524369 Clientid:01:52:54:00:16:73:ec}
	I0729 18:59:48.041025  151436 main.go:141] libmachine: (no-preload-524369) DBG | domain no-preload-524369 has defined IP address 192.168.72.7 and MAC address 52:54:00:16:73:ec in network mk-no-preload-524369
	I0729 18:59:48.041194  151436 main.go:141] libmachine: (no-preload-524369) DBG | Using SSH client type: external
	I0729 18:59:48.041220  151436 main.go:141] libmachine: (no-preload-524369) DBG | Using SSH private key: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/no-preload-524369/id_rsa (-rw-------)
	I0729 18:59:48.041250  151436 main.go:141] libmachine: (no-preload-524369) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.7 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19339-88081/.minikube/machines/no-preload-524369/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 18:59:48.041266  151436 main.go:141] libmachine: (no-preload-524369) DBG | About to run SSH command:
	I0729 18:59:48.041281  151436 main.go:141] libmachine: (no-preload-524369) DBG | exit 0
	I0729 18:59:48.168622  151436 main.go:141] libmachine: (no-preload-524369) DBG | SSH cmd err, output: <nil>: 
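WaitForSSH above shells out to /usr/bin/ssh with a fixed set of options and the machine's private key, then runs `exit 0` as a liveness probe. A sketch that builds the same invocation; externalSSH is hypothetical, and the option list is copied from the log line:

package main

import "os/exec"

// externalSSH builds a /usr/bin/ssh invocation with the options the
// WaitForSSH probe uses; user, host, key path, and the remote command
// are parameters.
func externalSSH(user, host, keyPath, remoteCmd string) *exec.Cmd {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "ControlMaster=no",
		"-o", "ControlPath=none",
		"-o", "LogLevel=quiet",
		"-o", "PasswordAuthentication=no",
		"-o", "ServerAliveInterval=60",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		user + "@" + host,
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		remoteCmd,
	}
	return exec.Command("/usr/bin/ssh", args...)
}

A caller would run externalSSH("docker", "192.168.72.7", keyPath, "exit 0").Run() and treat a nil error as "SSH is up", matching the `exit 0` probe in the log.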
	I0729 18:59:48.168999  151436 main.go:141] libmachine: (no-preload-524369) Calling .GetConfigRaw
	I0729 18:59:48.169662  151436 main.go:141] libmachine: (no-preload-524369) Calling .GetIP
	I0729 18:59:48.172212  151436 main.go:141] libmachine: (no-preload-524369) DBG | domain no-preload-524369 has defined MAC address 52:54:00:16:73:ec in network mk-no-preload-524369
	I0729 18:59:48.172581  151436 main.go:141] libmachine: (no-preload-524369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:73:ec", ip: ""} in network mk-no-preload-524369: {Iface:virbr2 ExpiryTime:2024-07-29 19:59:41 +0000 UTC Type:0 Mac:52:54:00:16:73:ec Iaid: IPaddr:192.168.72.7 Prefix:24 Hostname:no-preload-524369 Clientid:01:52:54:00:16:73:ec}
	I0729 18:59:48.172623  151436 main.go:141] libmachine: (no-preload-524369) DBG | domain no-preload-524369 has defined IP address 192.168.72.7 and MAC address 52:54:00:16:73:ec in network mk-no-preload-524369
	I0729 18:59:48.172824  151436 profile.go:143] Saving config to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/no-preload-524369/config.json ...
	I0729 18:59:48.173031  151436 machine.go:94] provisionDockerMachine start ...
	I0729 18:59:48.173050  151436 main.go:141] libmachine: (no-preload-524369) Calling .DriverName
	I0729 18:59:48.173251  151436 main.go:141] libmachine: (no-preload-524369) Calling .GetSSHHostname
	I0729 18:59:48.175382  151436 main.go:141] libmachine: (no-preload-524369) DBG | domain no-preload-524369 has defined MAC address 52:54:00:16:73:ec in network mk-no-preload-524369
	I0729 18:59:48.175683  151436 main.go:141] libmachine: (no-preload-524369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:73:ec", ip: ""} in network mk-no-preload-524369: {Iface:virbr2 ExpiryTime:2024-07-29 19:59:41 +0000 UTC Type:0 Mac:52:54:00:16:73:ec Iaid: IPaddr:192.168.72.7 Prefix:24 Hostname:no-preload-524369 Clientid:01:52:54:00:16:73:ec}
	I0729 18:59:48.175711  151436 main.go:141] libmachine: (no-preload-524369) DBG | domain no-preload-524369 has defined IP address 192.168.72.7 and MAC address 52:54:00:16:73:ec in network mk-no-preload-524369
	I0729 18:59:48.175861  151436 main.go:141] libmachine: (no-preload-524369) Calling .GetSSHPort
	I0729 18:59:48.176021  151436 main.go:141] libmachine: (no-preload-524369) Calling .GetSSHKeyPath
	I0729 18:59:48.176173  151436 main.go:141] libmachine: (no-preload-524369) Calling .GetSSHKeyPath
	I0729 18:59:48.176281  151436 main.go:141] libmachine: (no-preload-524369) Calling .GetSSHUsername
	I0729 18:59:48.176399  151436 main.go:141] libmachine: Using SSH client type: native
	I0729 18:59:48.176596  151436 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.7 22 <nil> <nil>}
	I0729 18:59:48.176606  151436 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 18:59:48.285198  151436 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 18:59:48.285230  151436 main.go:141] libmachine: (no-preload-524369) Calling .GetMachineName
	I0729 18:59:48.285490  151436 buildroot.go:166] provisioning hostname "no-preload-524369"
	I0729 18:59:48.285519  151436 main.go:141] libmachine: (no-preload-524369) Calling .GetMachineName
	I0729 18:59:48.285726  151436 main.go:141] libmachine: (no-preload-524369) Calling .GetSSHHostname
	I0729 18:59:48.288488  151436 main.go:141] libmachine: (no-preload-524369) DBG | domain no-preload-524369 has defined MAC address 52:54:00:16:73:ec in network mk-no-preload-524369
	I0729 18:59:48.288889  151436 main.go:141] libmachine: (no-preload-524369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:73:ec", ip: ""} in network mk-no-preload-524369: {Iface:virbr2 ExpiryTime:2024-07-29 19:59:41 +0000 UTC Type:0 Mac:52:54:00:16:73:ec Iaid: IPaddr:192.168.72.7 Prefix:24 Hostname:no-preload-524369 Clientid:01:52:54:00:16:73:ec}
	I0729 18:59:48.288917  151436 main.go:141] libmachine: (no-preload-524369) DBG | domain no-preload-524369 has defined IP address 192.168.72.7 and MAC address 52:54:00:16:73:ec in network mk-no-preload-524369
	I0729 18:59:48.289021  151436 main.go:141] libmachine: (no-preload-524369) Calling .GetSSHPort
	I0729 18:59:48.289188  151436 main.go:141] libmachine: (no-preload-524369) Calling .GetSSHKeyPath
	I0729 18:59:48.289369  151436 main.go:141] libmachine: (no-preload-524369) Calling .GetSSHKeyPath
	I0729 18:59:48.289570  151436 main.go:141] libmachine: (no-preload-524369) Calling .GetSSHUsername
	I0729 18:59:48.289736  151436 main.go:141] libmachine: Using SSH client type: native
	I0729 18:59:48.289940  151436 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.7 22 <nil> <nil>}
	I0729 18:59:48.289955  151436 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-524369 && echo "no-preload-524369" | sudo tee /etc/hostname
	I0729 18:59:48.410577  151436 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-524369
	
	I0729 18:59:48.410605  151436 main.go:141] libmachine: (no-preload-524369) Calling .GetSSHHostname
	I0729 18:59:48.413242  151436 main.go:141] libmachine: (no-preload-524369) DBG | domain no-preload-524369 has defined MAC address 52:54:00:16:73:ec in network mk-no-preload-524369
	I0729 18:59:48.413534  151436 main.go:141] libmachine: (no-preload-524369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:73:ec", ip: ""} in network mk-no-preload-524369: {Iface:virbr2 ExpiryTime:2024-07-29 19:59:41 +0000 UTC Type:0 Mac:52:54:00:16:73:ec Iaid: IPaddr:192.168.72.7 Prefix:24 Hostname:no-preload-524369 Clientid:01:52:54:00:16:73:ec}
	I0729 18:59:48.413571  151436 main.go:141] libmachine: (no-preload-524369) DBG | domain no-preload-524369 has defined IP address 192.168.72.7 and MAC address 52:54:00:16:73:ec in network mk-no-preload-524369
	I0729 18:59:48.413703  151436 main.go:141] libmachine: (no-preload-524369) Calling .GetSSHPort
	I0729 18:59:48.413899  151436 main.go:141] libmachine: (no-preload-524369) Calling .GetSSHKeyPath
	I0729 18:59:48.414045  151436 main.go:141] libmachine: (no-preload-524369) Calling .GetSSHKeyPath
	I0729 18:59:48.414190  151436 main.go:141] libmachine: (no-preload-524369) Calling .GetSSHUsername
	I0729 18:59:48.414346  151436 main.go:141] libmachine: Using SSH client type: native
	I0729 18:59:48.414570  151436 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.7 22 <nil> <nil>}
	I0729 18:59:48.414595  151436 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-524369' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-524369/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-524369' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 18:59:48.533286  151436 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 18:59:48.533320  151436 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19339-88081/.minikube CaCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19339-88081/.minikube}
	I0729 18:59:48.533338  151436 buildroot.go:174] setting up certificates
	I0729 18:59:48.533348  151436 provision.go:84] configureAuth start
	I0729 18:59:48.533360  151436 main.go:141] libmachine: (no-preload-524369) Calling .GetMachineName
	I0729 18:59:48.533651  151436 main.go:141] libmachine: (no-preload-524369) Calling .GetIP
	I0729 18:59:48.536231  151436 main.go:141] libmachine: (no-preload-524369) DBG | domain no-preload-524369 has defined MAC address 52:54:00:16:73:ec in network mk-no-preload-524369
	I0729 18:59:48.536578  151436 main.go:141] libmachine: (no-preload-524369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:73:ec", ip: ""} in network mk-no-preload-524369: {Iface:virbr2 ExpiryTime:2024-07-29 19:59:41 +0000 UTC Type:0 Mac:52:54:00:16:73:ec Iaid: IPaddr:192.168.72.7 Prefix:24 Hostname:no-preload-524369 Clientid:01:52:54:00:16:73:ec}
	I0729 18:59:48.536609  151436 main.go:141] libmachine: (no-preload-524369) DBG | domain no-preload-524369 has defined IP address 192.168.72.7 and MAC address 52:54:00:16:73:ec in network mk-no-preload-524369
	I0729 18:59:48.536691  151436 main.go:141] libmachine: (no-preload-524369) Calling .GetSSHHostname
	I0729 18:59:48.538641  151436 main.go:141] libmachine: (no-preload-524369) DBG | domain no-preload-524369 has defined MAC address 52:54:00:16:73:ec in network mk-no-preload-524369
	I0729 18:59:48.538954  151436 main.go:141] libmachine: (no-preload-524369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:73:ec", ip: ""} in network mk-no-preload-524369: {Iface:virbr2 ExpiryTime:2024-07-29 19:59:41 +0000 UTC Type:0 Mac:52:54:00:16:73:ec Iaid: IPaddr:192.168.72.7 Prefix:24 Hostname:no-preload-524369 Clientid:01:52:54:00:16:73:ec}
	I0729 18:59:48.538980  151436 main.go:141] libmachine: (no-preload-524369) DBG | domain no-preload-524369 has defined IP address 192.168.72.7 and MAC address 52:54:00:16:73:ec in network mk-no-preload-524369
	I0729 18:59:48.539125  151436 provision.go:143] copyHostCerts
	I0729 18:59:48.539184  151436 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem, removing ...
	I0729 18:59:48.539196  151436 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem
	I0729 18:59:48.539264  151436 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem (1078 bytes)
	I0729 18:59:48.539390  151436 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem, removing ...
	I0729 18:59:48.539401  151436 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem
	I0729 18:59:48.539430  151436 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem (1123 bytes)
	I0729 18:59:48.539503  151436 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem, removing ...
	I0729 18:59:48.539513  151436 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem
	I0729 18:59:48.539542  151436 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem (1679 bytes)
	I0729 18:59:48.539616  151436 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem org=jenkins.no-preload-524369 san=[127.0.0.1 192.168.72.7 localhost minikube no-preload-524369]
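
The provision step above generates a server certificate signed by the minikube CA, with a SAN list covering the loopback address, the VM IP and the machine names. A minimal, self-contained Go sketch of the same idea (not minikube's actual provision code; it creates a throwaway CA in memory instead of reading ca.pem/ca-key.pem) might look like:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA, standing in for ~/.minikube/certs/ca.pem and ca-key.pem.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate with the SANs from the log line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-524369"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "no-preload-524369"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.7")},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

        // Emit server.pem, the analogue of ServerCertPath above.
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }
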
	I0729 18:59:44.773401  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:45.274278  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:45.773998  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:46.273669  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:46.773390  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:47.273729  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:47.773855  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:48.273869  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:48.773703  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:49.273532  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:48.998479  151436 provision.go:177] copyRemoteCerts
	I0729 18:59:48.998551  151436 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 18:59:48.998575  151436 main.go:141] libmachine: (no-preload-524369) Calling .GetSSHHostname
	I0729 18:59:49.001366  151436 main.go:141] libmachine: (no-preload-524369) DBG | domain no-preload-524369 has defined MAC address 52:54:00:16:73:ec in network mk-no-preload-524369
	I0729 18:59:49.001690  151436 main.go:141] libmachine: (no-preload-524369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:73:ec", ip: ""} in network mk-no-preload-524369: {Iface:virbr2 ExpiryTime:2024-07-29 19:59:41 +0000 UTC Type:0 Mac:52:54:00:16:73:ec Iaid: IPaddr:192.168.72.7 Prefix:24 Hostname:no-preload-524369 Clientid:01:52:54:00:16:73:ec}
	I0729 18:59:49.001718  151436 main.go:141] libmachine: (no-preload-524369) DBG | domain no-preload-524369 has defined IP address 192.168.72.7 and MAC address 52:54:00:16:73:ec in network mk-no-preload-524369
	I0729 18:59:49.001925  151436 main.go:141] libmachine: (no-preload-524369) Calling .GetSSHPort
	I0729 18:59:49.002135  151436 main.go:141] libmachine: (no-preload-524369) Calling .GetSSHKeyPath
	I0729 18:59:49.002313  151436 main.go:141] libmachine: (no-preload-524369) Calling .GetSSHUsername
	I0729 18:59:49.002483  151436 sshutil.go:53] new ssh client: &{IP:192.168.72.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/no-preload-524369/id_rsa Username:docker}
	I0729 18:59:49.087198  151436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 18:59:49.112229  151436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 18:59:49.135067  151436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 18:59:49.158120  151436 provision.go:87] duration metric: took 624.758057ms to configureAuth
	I0729 18:59:49.158149  151436 buildroot.go:189] setting minikube options for container-runtime
	I0729 18:59:49.158336  151436 config.go:182] Loaded profile config "no-preload-524369": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 18:59:49.158408  151436 main.go:141] libmachine: (no-preload-524369) Calling .GetSSHHostname
	I0729 18:59:49.161156  151436 main.go:141] libmachine: (no-preload-524369) DBG | domain no-preload-524369 has defined MAC address 52:54:00:16:73:ec in network mk-no-preload-524369
	I0729 18:59:49.161538  151436 main.go:141] libmachine: (no-preload-524369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:73:ec", ip: ""} in network mk-no-preload-524369: {Iface:virbr2 ExpiryTime:2024-07-29 19:59:41 +0000 UTC Type:0 Mac:52:54:00:16:73:ec Iaid: IPaddr:192.168.72.7 Prefix:24 Hostname:no-preload-524369 Clientid:01:52:54:00:16:73:ec}
	I0729 18:59:49.161556  151436 main.go:141] libmachine: (no-preload-524369) DBG | domain no-preload-524369 has defined IP address 192.168.72.7 and MAC address 52:54:00:16:73:ec in network mk-no-preload-524369
	I0729 18:59:49.161745  151436 main.go:141] libmachine: (no-preload-524369) Calling .GetSSHPort
	I0729 18:59:49.161946  151436 main.go:141] libmachine: (no-preload-524369) Calling .GetSSHKeyPath
	I0729 18:59:49.162154  151436 main.go:141] libmachine: (no-preload-524369) Calling .GetSSHKeyPath
	I0729 18:59:49.162320  151436 main.go:141] libmachine: (no-preload-524369) Calling .GetSSHUsername
	I0729 18:59:49.162481  151436 main.go:141] libmachine: Using SSH client type: native
	I0729 18:59:49.162711  151436 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.7 22 <nil> <nil>}
	I0729 18:59:49.162732  151436 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 18:59:49.438760  151436 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 18:59:49.438795  151436 machine.go:97] duration metric: took 1.265749908s to provisionDockerMachine
	I0729 18:59:49.438809  151436 start.go:293] postStartSetup for "no-preload-524369" (driver="kvm2")
	I0729 18:59:49.438827  151436 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 18:59:49.438848  151436 main.go:141] libmachine: (no-preload-524369) Calling .DriverName
	I0729 18:59:49.439173  151436 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 18:59:49.439203  151436 main.go:141] libmachine: (no-preload-524369) Calling .GetSSHHostname
	I0729 18:59:49.442058  151436 main.go:141] libmachine: (no-preload-524369) DBG | domain no-preload-524369 has defined MAC address 52:54:00:16:73:ec in network mk-no-preload-524369
	I0729 18:59:49.442458  151436 main.go:141] libmachine: (no-preload-524369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:73:ec", ip: ""} in network mk-no-preload-524369: {Iface:virbr2 ExpiryTime:2024-07-29 19:59:41 +0000 UTC Type:0 Mac:52:54:00:16:73:ec Iaid: IPaddr:192.168.72.7 Prefix:24 Hostname:no-preload-524369 Clientid:01:52:54:00:16:73:ec}
	I0729 18:59:49.442488  151436 main.go:141] libmachine: (no-preload-524369) DBG | domain no-preload-524369 has defined IP address 192.168.72.7 and MAC address 52:54:00:16:73:ec in network mk-no-preload-524369
	I0729 18:59:49.442682  151436 main.go:141] libmachine: (no-preload-524369) Calling .GetSSHPort
	I0729 18:59:49.442870  151436 main.go:141] libmachine: (no-preload-524369) Calling .GetSSHKeyPath
	I0729 18:59:49.443042  151436 main.go:141] libmachine: (no-preload-524369) Calling .GetSSHUsername
	I0729 18:59:49.443210  151436 sshutil.go:53] new ssh client: &{IP:192.168.72.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/no-preload-524369/id_rsa Username:docker}
	I0729 18:59:49.531760  151436 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 18:59:49.536071  151436 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 18:59:49.536092  151436 filesync.go:126] Scanning /home/jenkins/minikube-integration/19339-88081/.minikube/addons for local assets ...
	I0729 18:59:49.536164  151436 filesync.go:126] Scanning /home/jenkins/minikube-integration/19339-88081/.minikube/files for local assets ...
	I0729 18:59:49.536263  151436 filesync.go:149] local asset: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem -> 952822.pem in /etc/ssl/certs
	I0729 18:59:49.536382  151436 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 18:59:49.546066  151436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem --> /etc/ssl/certs/952822.pem (1708 bytes)
	I0729 18:59:49.570108  151436 start.go:296] duration metric: took 131.286186ms for postStartSetup
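
The postStartSetup above scans .minikube/files and copies every file it finds to the matching absolute path on the VM (here 952822.pem ends up in /etc/ssl/certs). A rough Go sketch of that mapping step, assuming the path relative to the files directory is the target path, is:

    package main

    import (
        "fmt"
        "io/fs"
        "path/filepath"
        "strings"
    )

    // scanLocalAssets maps each file under <filesDir> to its destination on the
    // VM, mirroring the filesync scan logged above (illustrative only).
    func scanLocalAssets(filesDir string) (map[string]string, error) {
        assets := map[string]string{}
        err := filepath.WalkDir(filesDir, func(p string, d fs.DirEntry, err error) error {
            if err != nil || d.IsDir() {
                return err
            }
            // .../.minikube/files/etc/ssl/certs/952822.pem -> /etc/ssl/certs/952822.pem
            assets[p] = strings.TrimPrefix(p, filesDir)
            return nil
        })
        return assets, err
    }

    func main() {
        assets, err := scanLocalAssets("/home/jenkins/minikube-integration/19339-88081/.minikube/files")
        if err != nil {
            fmt.Println("error:", err)
            return
        }
        for src, dst := range assets {
            fmt.Printf("%s --> %s\n", src, dst)
        }
    }
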
	I0729 18:59:49.570142  151436 fix.go:56] duration metric: took 19.227781266s for fixHost
	I0729 18:59:49.570162  151436 main.go:141] libmachine: (no-preload-524369) Calling .GetSSHHostname
	I0729 18:59:49.572808  151436 main.go:141] libmachine: (no-preload-524369) DBG | domain no-preload-524369 has defined MAC address 52:54:00:16:73:ec in network mk-no-preload-524369
	I0729 18:59:49.573188  151436 main.go:141] libmachine: (no-preload-524369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:73:ec", ip: ""} in network mk-no-preload-524369: {Iface:virbr2 ExpiryTime:2024-07-29 19:59:41 +0000 UTC Type:0 Mac:52:54:00:16:73:ec Iaid: IPaddr:192.168.72.7 Prefix:24 Hostname:no-preload-524369 Clientid:01:52:54:00:16:73:ec}
	I0729 18:59:49.573219  151436 main.go:141] libmachine: (no-preload-524369) DBG | domain no-preload-524369 has defined IP address 192.168.72.7 and MAC address 52:54:00:16:73:ec in network mk-no-preload-524369
	I0729 18:59:49.573360  151436 main.go:141] libmachine: (no-preload-524369) Calling .GetSSHPort
	I0729 18:59:49.573550  151436 main.go:141] libmachine: (no-preload-524369) Calling .GetSSHKeyPath
	I0729 18:59:49.573706  151436 main.go:141] libmachine: (no-preload-524369) Calling .GetSSHKeyPath
	I0729 18:59:49.573796  151436 main.go:141] libmachine: (no-preload-524369) Calling .GetSSHUsername
	I0729 18:59:49.573979  151436 main.go:141] libmachine: Using SSH client type: native
	I0729 18:59:49.574138  151436 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.7 22 <nil> <nil>}
	I0729 18:59:49.574147  151436 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 18:59:49.685399  151436 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722279589.654985390
	
	I0729 18:59:49.685421  151436 fix.go:216] guest clock: 1722279589.654985390
	I0729 18:59:49.685428  151436 fix.go:229] Guest: 2024-07-29 18:59:49.65498539 +0000 UTC Remote: 2024-07-29 18:59:49.570146436 +0000 UTC m=+335.889824326 (delta=84.838954ms)
	I0729 18:59:49.685449  151436 fix.go:200] guest clock delta is within tolerance: 84.838954ms
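
The guest-clock check above runs "date +%s.%N" on the VM and compares the result to the host clock. A small Go sketch of the same comparison, reusing the exact values from the log (the 2-second tolerance is an assumed figure, not necessarily what fix.go uses):

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "time"
    )

    // parseGuestClock turns "date +%s.%N" output such as "1722279589.654985390"
    // into a time.Time; float64 rounding is fine for a tolerance check.
    func parseGuestClock(out string) (time.Time, error) {
        f, err := strconv.ParseFloat(out, 64)
        if err != nil {
            return time.Time{}, err
        }
        sec := int64(f)
        return time.Unix(sec, int64((f-float64(sec))*1e9)), nil
    }

    func main() {
        guest, err := parseGuestClock("1722279589.654985390") // guest clock from the log
        if err != nil {
            panic(err)
        }
        host := time.Unix(1722279589, 570146436) // "Remote:" timestamp from the log
        delta := guest.Sub(host)
        tolerance := 2 * time.Second // assumed threshold, for illustration only
        fmt.Printf("delta=%v, within tolerance: %v\n", delta,
            math.Abs(delta.Seconds()) < tolerance.Seconds())
    }
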
	I0729 18:59:49.685454  151436 start.go:83] releasing machines lock for "no-preload-524369", held for 19.343171265s
	I0729 18:59:49.685470  151436 main.go:141] libmachine: (no-preload-524369) Calling .DriverName
	I0729 18:59:49.685701  151436 main.go:141] libmachine: (no-preload-524369) Calling .GetIP
	I0729 18:59:49.688373  151436 main.go:141] libmachine: (no-preload-524369) DBG | domain no-preload-524369 has defined MAC address 52:54:00:16:73:ec in network mk-no-preload-524369
	I0729 18:59:49.688714  151436 main.go:141] libmachine: (no-preload-524369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:73:ec", ip: ""} in network mk-no-preload-524369: {Iface:virbr2 ExpiryTime:2024-07-29 19:59:41 +0000 UTC Type:0 Mac:52:54:00:16:73:ec Iaid: IPaddr:192.168.72.7 Prefix:24 Hostname:no-preload-524369 Clientid:01:52:54:00:16:73:ec}
	I0729 18:59:49.688739  151436 main.go:141] libmachine: (no-preload-524369) DBG | domain no-preload-524369 has defined IP address 192.168.72.7 and MAC address 52:54:00:16:73:ec in network mk-no-preload-524369
	I0729 18:59:49.688910  151436 main.go:141] libmachine: (no-preload-524369) Calling .DriverName
	I0729 18:59:49.689361  151436 main.go:141] libmachine: (no-preload-524369) Calling .DriverName
	I0729 18:59:49.689527  151436 main.go:141] libmachine: (no-preload-524369) Calling .DriverName
	I0729 18:59:49.689609  151436 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 18:59:49.689651  151436 main.go:141] libmachine: (no-preload-524369) Calling .GetSSHHostname
	I0729 18:59:49.689781  151436 ssh_runner.go:195] Run: cat /version.json
	I0729 18:59:49.689808  151436 main.go:141] libmachine: (no-preload-524369) Calling .GetSSHHostname
	I0729 18:59:49.692177  151436 main.go:141] libmachine: (no-preload-524369) DBG | domain no-preload-524369 has defined MAC address 52:54:00:16:73:ec in network mk-no-preload-524369
	I0729 18:59:49.692277  151436 main.go:141] libmachine: (no-preload-524369) DBG | domain no-preload-524369 has defined MAC address 52:54:00:16:73:ec in network mk-no-preload-524369
	I0729 18:59:49.692545  151436 main.go:141] libmachine: (no-preload-524369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:73:ec", ip: ""} in network mk-no-preload-524369: {Iface:virbr2 ExpiryTime:2024-07-29 19:59:41 +0000 UTC Type:0 Mac:52:54:00:16:73:ec Iaid: IPaddr:192.168.72.7 Prefix:24 Hostname:no-preload-524369 Clientid:01:52:54:00:16:73:ec}
	I0729 18:59:49.692572  151436 main.go:141] libmachine: (no-preload-524369) DBG | domain no-preload-524369 has defined IP address 192.168.72.7 and MAC address 52:54:00:16:73:ec in network mk-no-preload-524369
	I0729 18:59:49.692610  151436 main.go:141] libmachine: (no-preload-524369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:73:ec", ip: ""} in network mk-no-preload-524369: {Iface:virbr2 ExpiryTime:2024-07-29 19:59:41 +0000 UTC Type:0 Mac:52:54:00:16:73:ec Iaid: IPaddr:192.168.72.7 Prefix:24 Hostname:no-preload-524369 Clientid:01:52:54:00:16:73:ec}
	I0729 18:59:49.692626  151436 main.go:141] libmachine: (no-preload-524369) DBG | domain no-preload-524369 has defined IP address 192.168.72.7 and MAC address 52:54:00:16:73:ec in network mk-no-preload-524369
	I0729 18:59:49.692685  151436 main.go:141] libmachine: (no-preload-524369) Calling .GetSSHPort
	I0729 18:59:49.692897  151436 main.go:141] libmachine: (no-preload-524369) Calling .GetSSHPort
	I0729 18:59:49.692901  151436 main.go:141] libmachine: (no-preload-524369) Calling .GetSSHKeyPath
	I0729 18:59:49.693076  151436 main.go:141] libmachine: (no-preload-524369) Calling .GetSSHKeyPath
	I0729 18:59:49.693080  151436 main.go:141] libmachine: (no-preload-524369) Calling .GetSSHUsername
	I0729 18:59:49.693199  151436 main.go:141] libmachine: (no-preload-524369) Calling .GetSSHUsername
	I0729 18:59:49.693247  151436 sshutil.go:53] new ssh client: &{IP:192.168.72.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/no-preload-524369/id_rsa Username:docker}
	I0729 18:59:49.693310  151436 sshutil.go:53] new ssh client: &{IP:192.168.72.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/no-preload-524369/id_rsa Username:docker}
	I0729 18:59:49.794072  151436 ssh_runner.go:195] Run: systemctl --version
	I0729 18:59:49.800089  151436 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 18:59:49.944915  151436 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 18:59:49.951514  151436 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 18:59:49.951575  151436 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 18:59:49.968610  151436 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 18:59:49.968634  151436 start.go:495] detecting cgroup driver to use...
	I0729 18:59:49.968689  151436 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 18:59:49.984669  151436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 18:59:49.998171  151436 docker.go:217] disabling cri-docker service (if available) ...
	I0729 18:59:49.998218  151436 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 18:59:50.012289  151436 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 18:59:50.025429  151436 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 18:59:50.135151  151436 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 18:59:50.278117  151436 docker.go:233] disabling docker service ...
	I0729 18:59:50.278192  151436 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 18:59:50.295293  151436 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 18:59:50.310517  151436 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 18:59:50.463414  151436 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 18:59:50.583050  151436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 18:59:50.598125  151436 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 18:59:50.617824  151436 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0729 18:59:50.617894  151436 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:59:50.627838  151436 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 18:59:50.627904  151436 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:59:50.637888  151436 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:59:50.647595  151436 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:59:50.657447  151436 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 18:59:50.667551  151436 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:59:50.677286  151436 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:59:50.695918  151436 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
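
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf: pin the pause image to registry.k8s.io/pause:3.10, switch the cgroup manager to cgroupfs, force conmon into the pod cgroup, and open unprivileged ports through default_sysctls. A hypothetical in-memory Go equivalent of those edits (for illustration only, not how minikube applies them) is:

    package main

    import (
        "fmt"
        "regexp"
        "strings"
    )

    // patchCrioConf applies, in memory, the same edits the remote sed pipeline
    // above makes on the CRI-O drop-in file.
    func patchCrioConf(conf string) string {
        // pin the pause image
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
        // switch the cgroup manager
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
        // drop any existing conmon_cgroup line, then pin it to "pod"
        conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
        conf = strings.Replace(conf, `cgroup_manager = "cgroupfs"`,
            "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"", 1)
        // make sure default_sysctls opens unprivileged ports
        if !strings.Contains(conf, "default_sysctls") {
            conf += "default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
        }
        return conf
    }

    func main() {
        sample := "pause_image = \"registry.k8s.io/pause:3.9\"\n" +
            "cgroup_manager = \"systemd\"\n" +
            "conmon_cgroup = \"system.slice\"\n"
        fmt.Print(patchCrioConf(sample))
    }

Running it against a stock drop-in produces the same four changes the remote sed pipeline makes.
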
	I0729 18:59:50.706316  151436 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 18:59:50.715876  151436 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 18:59:50.715924  151436 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 18:59:50.729845  151436 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 18:59:50.739608  151436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:59:50.857056  151436 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 18:59:50.993131  151436 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 18:59:50.993201  151436 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 18:59:50.998323  151436 start.go:563] Will wait 60s for crictl version
	I0729 18:59:50.998392  151436 ssh_runner.go:195] Run: which crictl
	I0729 18:59:51.002290  151436 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 18:59:51.046910  151436 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 18:59:51.046988  151436 ssh_runner.go:195] Run: crio --version
	I0729 18:59:51.076328  151436 ssh_runner.go:195] Run: crio --version
	I0729 18:59:51.107727  151436 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0729 18:59:47.444526  151772 pod_ready.go:102] pod "metrics-server-569cc877fc-shvq4" in "kube-system" namespace has status "Ready":"False"
	I0729 18:59:49.445932  151772 pod_ready.go:102] pod "metrics-server-569cc877fc-shvq4" in "kube-system" namespace has status "Ready":"False"
	I0729 18:59:51.109225  151436 main.go:141] libmachine: (no-preload-524369) Calling .GetIP
	I0729 18:59:51.111667  151436 main.go:141] libmachine: (no-preload-524369) DBG | domain no-preload-524369 has defined MAC address 52:54:00:16:73:ec in network mk-no-preload-524369
	I0729 18:59:51.111980  151436 main.go:141] libmachine: (no-preload-524369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:73:ec", ip: ""} in network mk-no-preload-524369: {Iface:virbr2 ExpiryTime:2024-07-29 19:59:41 +0000 UTC Type:0 Mac:52:54:00:16:73:ec Iaid: IPaddr:192.168.72.7 Prefix:24 Hostname:no-preload-524369 Clientid:01:52:54:00:16:73:ec}
	I0729 18:59:51.112014  151436 main.go:141] libmachine: (no-preload-524369) DBG | domain no-preload-524369 has defined IP address 192.168.72.7 and MAC address 52:54:00:16:73:ec in network mk-no-preload-524369
	I0729 18:59:51.112173  151436 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0729 18:59:51.116252  151436 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
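
The /etc/hosts rewrite above drops any stale host.minikube.internal line and appends the current gateway IP. A tiny Go sketch of the same upsert, operating on the file contents as a string (illustrative only):

    package main

    import (
        "fmt"
        "strings"
    )

    // upsertHostEntry mirrors the shell one-liner above: remove any line ending
    // in "\t<name>", then append "<ip>\t<name>".
    func upsertHostEntry(hosts, ip, name string) string {
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue // the grep -v part
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name) // the echo part
        return strings.Join(kept, "\n") + "\n"
    }

    func main() {
        hosts := "127.0.0.1\tlocalhost\n192.168.72.1\thost.minikube.internal\n"
        fmt.Print(upsertHostEntry(hosts, "192.168.72.1", "host.minikube.internal"))
    }
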
	I0729 18:59:51.129044  151436 kubeadm.go:883] updating cluster {Name:no-preload-524369 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:no-preload-524369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.7 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h
0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 18:59:51.129155  151436 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 18:59:51.129202  151436 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:59:51.163387  151436 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0729 18:59:51.163420  151436 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 18:59:51.163459  151436 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:59:51.163475  151436 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 18:59:51.163521  151436 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 18:59:51.163544  151436 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 18:59:51.163565  151436 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 18:59:51.163632  151436 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 18:59:51.163690  151436 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0729 18:59:51.163781  151436 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0729 18:59:51.165032  151436 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:59:51.165050  151436 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 18:59:51.165055  151436 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 18:59:51.165069  151436 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0729 18:59:51.165060  151436 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 18:59:51.165055  151436 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0729 18:59:51.165108  151436 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 18:59:51.165345  151436 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 18:59:51.337434  151436 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0729 18:59:51.340144  151436 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 18:59:51.347997  151436 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0729 18:59:51.356232  151436 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0729 18:59:51.366078  151436 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 18:59:51.397807  151436 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 18:59:51.399680  151436 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0729 18:59:51.399733  151436 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0729 18:59:51.399774  151436 ssh_runner.go:195] Run: which crictl
	I0729 18:59:51.401534  151436 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 18:59:51.444494  151436 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0729 18:59:51.444546  151436 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 18:59:51.444594  151436 ssh_runner.go:195] Run: which crictl
	I0729 18:59:51.496409  151436 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0729 18:59:51.496455  151436 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 18:59:51.496506  151436 ssh_runner.go:195] Run: which crictl
	I0729 18:59:51.496569  151436 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0729 18:59:51.496615  151436 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 18:59:51.496652  151436 ssh_runner.go:195] Run: which crictl
	I0729 18:59:51.510539  151436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0729 18:59:51.510647  151436 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0729 18:59:51.510693  151436 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 18:59:51.510742  151436 ssh_runner.go:195] Run: which crictl
	I0729 18:59:51.526011  151436 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0729 18:59:51.526069  151436 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 18:59:51.526110  151436 ssh_runner.go:195] Run: which crictl
	I0729 18:59:51.526117  151436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 18:59:51.526157  151436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0729 18:59:51.526174  151436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 18:59:51.530091  151436 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:59:51.582945  151436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 18:59:51.583083  151436 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0729 18:59:51.583192  151436 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0
	I0729 18:59:51.639262  151436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 18:59:51.639284  151436 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0729 18:59:51.639359  151436 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0729 18:59:51.639359  151436 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0729 18:59:51.639398  151436 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0729 18:59:51.639438  151436 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:59:51.639368  151436 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0729 18:59:51.639451  151436 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 18:59:51.639453  151436 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 18:59:51.639474  151436 ssh_runner.go:195] Run: which crictl
	I0729 18:59:51.680880  151436 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0729 18:59:51.680934  151436 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0729 18:59:51.680954  151436 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0729 18:59:51.680997  151436 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 18:59:51.681004  151436 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0729 18:59:51.703282  151436 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0729 18:59:51.703370  151436 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0729 18:59:51.703370  151436 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0729 18:59:51.703404  151436 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0729 18:59:51.703471  151436 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 18:59:51.703488  151436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:59:51.703470  151436 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
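
The cache-image handling above comes down to: probe the runtime for each required image, and when one is missing, remove the stale reference and load the cached tarball from /var/lib/minikube/images, skipping the copy if the tarball already exists on the VM. A simplified local Go sketch of that decision (assuming podman is on PATH and ignoring the SSH transport used in the log) is:

    package main

    import (
        "fmt"
        "os/exec"
        "path/filepath"
    )

    // ensureImage loads a cached tarball only when the image is absent from the
    // podman/CRI-O image store, mirroring the "needs transfer" logic above.
    func ensureImage(image, tarball string) error {
        // "podman image inspect" exits non-zero when the image is not present.
        if err := exec.Command("sudo", "podman", "image", "inspect", image).Run(); err == nil {
            return nil // already present, nothing to do
        }
        fmt.Printf("loading %s from %s\n", image, tarball)
        out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
        if err != nil {
            return fmt.Errorf("podman load: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        images := map[string]string{
            "registry.k8s.io/etcd:3.5.14-0":           "etcd_3.5.14-0",
            "registry.k8s.io/coredns/coredns:v1.11.1": "coredns_v1.11.1",
        }
        for img, f := range images {
            if err := ensureImage(img, filepath.Join("/var/lib/minikube/images", f)); err != nil {
                fmt.Println("error:", err)
            }
        }
    }
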
	I0729 18:59:49.774260  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:50.273544  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:50.774284  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:51.274389  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:51.774063  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:52.274103  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:52.774063  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:53.274140  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:53.773533  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:54.274045  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:51.945692  151772 pod_ready.go:102] pod "metrics-server-569cc877fc-shvq4" in "kube-system" namespace has status "Ready":"False"
	I0729 18:59:54.445291  151772 pod_ready.go:102] pod "metrics-server-569cc877fc-shvq4" in "kube-system" namespace has status "Ready":"False"
	I0729 18:59:56.445777  151772 pod_ready.go:102] pod "metrics-server-569cc877fc-shvq4" in "kube-system" namespace has status "Ready":"False"
	I0729 18:59:55.410165  151436 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.706646359s)
	I0729 18:59:55.410204  151436 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (3.70670231s)
	I0729 18:59:55.410225  151436 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.729206553s)
	I0729 18:59:55.410228  151436 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0729 18:59:55.410237  151436 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19339-88081/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0729 18:59:55.410249  151436 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0729 18:59:55.410270  151436 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0729 18:59:55.410315  151436 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0729 18:59:55.410339  151436 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0729 18:59:55.416035  151436 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0729 18:59:57.299238  151436 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.888876929s)
	I0729 18:59:57.299272  151436 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19339-88081/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0729 18:59:57.299309  151436 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 18:59:57.299367  151436 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 18:59:54.774107  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:55.274068  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:55.773381  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:56.274102  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:56.773461  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:57.274039  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:57.774105  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:58.274395  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:58.774088  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:59.273822  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:58.445916  151772 pod_ready.go:102] pod "metrics-server-569cc877fc-shvq4" in "kube-system" namespace has status "Ready":"False"
	I0729 19:00:00.945508  151772 pod_ready.go:102] pod "metrics-server-569cc877fc-shvq4" in "kube-system" namespace has status "Ready":"False"
	I0729 18:59:59.255362  151436 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (1.955964677s)
	I0729 18:59:59.255415  151436 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19339-88081/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0729 18:59:59.255457  151436 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 18:59:59.255522  151436 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 19:00:00.611967  151436 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.356413459s)
	I0729 19:00:00.611999  151436 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19339-88081/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0729 19:00:00.612029  151436 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 19:00:00.612080  151436 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 19:00:02.372368  151436 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.760260788s)
	I0729 19:00:02.372408  151436 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19339-88081/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0729 19:00:02.372434  151436 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 19:00:02.372473  151436 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 18:59:59.774344  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:00.274074  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:00.773606  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:01.273454  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:01.773551  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:02.273747  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:02.773849  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:03.273732  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:03.773484  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:04.274361  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:03.445509  151772 pod_ready.go:102] pod "metrics-server-569cc877fc-shvq4" in "kube-system" namespace has status "Ready":"False"
	I0729 19:00:05.445981  151772 pod_ready.go:102] pod "metrics-server-569cc877fc-shvq4" in "kube-system" namespace has status "Ready":"False"
	I0729 19:00:04.545771  151436 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.173267041s)
	I0729 19:00:04.545808  151436 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19339-88081/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0729 19:00:04.545835  151436 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0729 19:00:04.545886  151436 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0729 19:00:05.193501  151436 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19339-88081/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0729 19:00:05.193550  151436 cache_images.go:123] Successfully loaded all cached images
	I0729 19:00:05.193558  151436 cache_images.go:92] duration metric: took 14.030126845s to LoadCachedImages
	I0729 19:00:05.193575  151436 kubeadm.go:934] updating node { 192.168.72.7 8443 v1.31.0-beta.0 crio true true} ...
	I0729 19:00:05.193724  151436 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-524369 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-524369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 19:00:05.193818  151436 ssh_runner.go:195] Run: crio config
	I0729 19:00:05.242318  151436 cni.go:84] Creating CNI manager for ""
	I0729 19:00:05.242339  151436 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:00:05.242351  151436 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 19:00:05.242381  151436 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.7 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-524369 NodeName:no-preload-524369 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.7"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.7 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 19:00:05.242698  151436 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.7
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-524369"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.7
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.7"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
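
The kubeadm, kubelet and kube-proxy configuration dumped above is rendered from a handful of per-node values (cgroup driver, CRI socket, cluster domain, static pod path). A toy Go sketch that renders just the KubeletConfiguration fragment with text/template (field names and the template text are illustrative, not minikube's bootstrapper template):

    package main

    import (
        "os"
        "text/template"
    )

    // kubeletCfg holds the few values substituted into the KubeletConfiguration
    // shown in the log above (names here are illustrative).
    type kubeletCfg struct {
        CgroupDriver  string
        CRISocket     string
        ClusterDomain string
        StaticPodPath string
    }

    const kubeletTmpl = `apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: {{.CgroupDriver}}
    containerRuntimeEndpoint: {{.CRISocket}}
    clusterDomain: "{{.ClusterDomain}}"
    staticPodPath: {{.StaticPodPath}}
    failSwapOn: false
    `

    func main() {
        cfg := kubeletCfg{
            CgroupDriver:  "cgroupfs",
            CRISocket:     "unix:///var/run/crio/crio.sock",
            ClusterDomain: "cluster.local",
            StaticPodPath: "/etc/kubernetes/manifests",
        }
        t := template.Must(template.New("kubelet").Parse(kubeletTmpl))
        if err := t.Execute(os.Stdout, cfg); err != nil {
            panic(err)
        }
    }
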
	
	I0729 19:00:05.242786  151436 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0729 19:00:05.255315  151436 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 19:00:05.255385  151436 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 19:00:05.265444  151436 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0729 19:00:05.283481  151436 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0729 19:00:05.300742  151436 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0729 19:00:05.317638  151436 ssh_runner.go:195] Run: grep 192.168.72.7	control-plane.minikube.internal$ /etc/hosts
	I0729 19:00:05.321346  151436 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.7	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 19:00:05.333260  151436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:00:05.459413  151436 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 19:00:05.476778  151436 certs.go:68] Setting up /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/no-preload-524369 for IP: 192.168.72.7
	I0729 19:00:05.476803  151436 certs.go:194] generating shared ca certs ...
	I0729 19:00:05.476825  151436 certs.go:226] acquiring lock for ca certs: {Name:mk20106d58b3f22ea0fc0f4b499fa3fa572a2690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:00:05.477030  151436 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key
	I0729 19:00:05.477085  151436 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key
	I0729 19:00:05.477098  151436 certs.go:256] generating profile certs ...
	I0729 19:00:05.477206  151436 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/no-preload-524369/client.key
	I0729 19:00:05.477294  151436 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/no-preload-524369/apiserver.key.d0294554
	I0729 19:00:05.477343  151436 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/no-preload-524369/proxy-client.key
	I0729 19:00:05.477484  151436 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282.pem (1338 bytes)
	W0729 19:00:05.477527  151436 certs.go:480] ignoring /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282_empty.pem, impossibly tiny 0 bytes
	I0729 19:00:05.477541  151436 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 19:00:05.477581  151436 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem (1078 bytes)
	I0729 19:00:05.477644  151436 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem (1123 bytes)
	I0729 19:00:05.477684  151436 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem (1679 bytes)
	I0729 19:00:05.477751  151436 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem (1708 bytes)
	I0729 19:00:05.478620  151436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 19:00:05.513782  151436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 19:00:05.544592  151436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 19:00:05.579772  151436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 19:00:05.611931  151436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/no-preload-524369/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 19:00:05.640222  151436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/no-preload-524369/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 19:00:05.678141  151436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/no-preload-524369/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 19:00:05.701561  151436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/no-preload-524369/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 19:00:05.726034  151436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 19:00:05.749151  151436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282.pem --> /usr/share/ca-certificates/95282.pem (1338 bytes)
	I0729 19:00:05.771679  151436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem --> /usr/share/ca-certificates/952822.pem (1708 bytes)
	I0729 19:00:05.796583  151436 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 19:00:05.812986  151436 ssh_runner.go:195] Run: openssl version
	I0729 19:00:05.818649  151436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 19:00:05.829482  151436 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:00:05.833698  151436 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:00:05.833746  151436 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:00:05.839403  151436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 19:00:05.850019  151436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/95282.pem && ln -fs /usr/share/ca-certificates/95282.pem /etc/ssl/certs/95282.pem"
	I0729 19:00:05.860712  151436 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/95282.pem
	I0729 19:00:05.864999  151436 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:45 /usr/share/ca-certificates/95282.pem
	I0729 19:00:05.865053  151436 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/95282.pem
	I0729 19:00:05.870526  151436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/95282.pem /etc/ssl/certs/51391683.0"
	I0729 19:00:05.881034  151436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/952822.pem && ln -fs /usr/share/ca-certificates/952822.pem /etc/ssl/certs/952822.pem"
	I0729 19:00:05.891789  151436 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/952822.pem
	I0729 19:00:05.896033  151436 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:45 /usr/share/ca-certificates/952822.pem
	I0729 19:00:05.896093  151436 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/952822.pem
	I0729 19:00:05.901731  151436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/952822.pem /etc/ssl/certs/3ec20f2e.0"
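The three "ln -fs" steps above expose each CA certificate under /etc/ssl/certs by its OpenSSL subject hash (b5213941.0, 51391683.0, 3ec20f2e.0), which is how OpenSSL-based clients scan that directory for trusted roots. The sketch below is not minikube's own code; it is a minimal Go illustration of the same hash-and-symlink idea, assuming openssl is on PATH and the process can write to /etc/ssl/certs.

// Compute the OpenSSL subject hash of a CA certificate and expose it in
// /etc/ssl/certs as "<hash>.0", mirroring the "openssl x509 -hash" +
// "ln -fs" pair in the log above. Illustrative only.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCACert(certPath, certsDir string) error {
	// "openssl x509 -hash -noout -in <cert>" prints the subject hash,
	// e.g. "b5213941", matching the b5213941.0 symlink in the log.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	// Replace any stale link, mirroring "ln -fs" in the log.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}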
	I0729 19:00:05.912251  151436 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 19:00:05.916474  151436 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 19:00:05.922109  151436 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 19:00:05.927675  151436 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 19:00:05.933364  151436 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 19:00:05.938895  151436 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 19:00:05.944794  151436 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
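Each "openssl x509 -noout ... -checkend 86400" run above asks whether a certificate expires within the next 24 hours (86400 seconds). A minimal Go equivalent using crypto/x509 is sketched below; the file path is one of those checked in the log, and this is an illustration rather than the test harness's own implementation.

// Equivalent of "openssl x509 -noout -checkend 86400": report whether a
// PEM-encoded certificate expires within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True if "now + window" is past NotAfter, i.e. the cert expires soon.
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}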
	I0729 19:00:05.950647  151436 kubeadm.go:392] StartCluster: {Name:no-preload-524369 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-524369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.7 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:00:05.950722  151436 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 19:00:05.950777  151436 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:00:05.988688  151436 cri.go:89] found id: ""
	I0729 19:00:05.988756  151436 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 19:00:05.999426  151436 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 19:00:05.999449  151436 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 19:00:05.999497  151436 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 19:00:06.009872  151436 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 19:00:06.010786  151436 kubeconfig.go:125] found "no-preload-524369" server: "https://192.168.72.7:8443"
	I0729 19:00:06.012587  151436 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 19:00:06.022773  151436 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.7
	I0729 19:00:06.022809  151436 kubeadm.go:1160] stopping kube-system containers ...
	I0729 19:00:06.022822  151436 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 19:00:06.022881  151436 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:00:06.060654  151436 cri.go:89] found id: ""
	I0729 19:00:06.060723  151436 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 19:00:06.078207  151436 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:00:06.087749  151436 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:00:06.087766  151436 kubeadm.go:157] found existing configuration files:
	
	I0729 19:00:06.087803  151436 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 19:00:06.096795  151436 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:00:06.096837  151436 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:00:06.106080  151436 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 19:00:06.115196  151436 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:00:06.115250  151436 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:00:06.124715  151436 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 19:00:06.133631  151436 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:00:06.133683  151436 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:00:06.142694  151436 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 19:00:06.151353  151436 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:00:06.151408  151436 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
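The grep-and-remove sequence above keeps an existing /etc/kubernetes/*.conf only if it already points at https://control-plane.minikube.internal:8443; otherwise the file is removed so kubeadm can regenerate it in the following phases. A hedged Go sketch of that cleanup loop (not the actual minikube code; paths and endpoint taken from the log) follows.

// Keep each kubeconfig-style file only if it references the expected
// control-plane endpoint; otherwise delete it, mirroring the
// "sudo grep ... || sudo rm -f ..." pattern in the log above.
package main

import (
	"bytes"
	"fmt"
	"os"
)

func cleanStaleConfigs(endpoint string, files []string) {
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err == nil && bytes.Contains(data, []byte(endpoint)) {
			continue // file exists and already points at the endpoint: keep it
		}
		// Missing file or wrong endpoint: remove it (ignore "not exist" errors).
		if err := os.Remove(f); err != nil && !os.IsNotExist(err) {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}

func main() {
	cleanStaleConfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}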
	I0729 19:00:06.160622  151436 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:00:06.169980  151436 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:00:06.283837  151436 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:00:07.073734  151436 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:00:07.276968  151436 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:00:07.352289  151436 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
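Rather than a full "kubeadm init", the restart path above replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the rendered /var/tmp/minikube/kubeadm.yaml. The following is a minimal sketch of that sequence; the binary and config paths are copied from the log, and error handling is simplified.

// Replay the individual "kubeadm init phase" steps seen in the log above.
// Illustrative only; real callers run these over SSH with sudo.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.31.0-beta.0/kubeadm"
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", cfg)
		cmd := exec.Command(kubeadm, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "kubeadm %v failed: %v\n", p, err)
			os.Exit(1)
		}
	}
}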
	I0729 19:00:07.454239  151436 api_server.go:52] waiting for apiserver process to appear ...
	I0729 19:00:07.454326  151436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:07.954809  151436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:08.455446  151436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:08.472567  151436 api_server.go:72] duration metric: took 1.018327586s to wait for apiserver process to appear ...
	I0729 19:00:08.472596  151436 api_server.go:88] waiting for apiserver healthz status ...
	I0729 19:00:08.472618  151436 api_server.go:253] Checking apiserver healthz at https://192.168.72.7:8443/healthz ...
	I0729 19:00:08.473141  151436 api_server.go:269] stopped: https://192.168.72.7:8443/healthz: Get "https://192.168.72.7:8443/healthz": dial tcp 192.168.72.7:8443: connect: connection refused
	I0729 19:00:04.773330  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:05.274258  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:05.773922  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:06.273449  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:06.774301  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:07.274401  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:07.773732  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:08.274173  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:08.773487  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:09.273473  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:07.944774  151772 pod_ready.go:102] pod "metrics-server-569cc877fc-shvq4" in "kube-system" namespace has status "Ready":"False"
	I0729 19:00:09.945098  151772 pod_ready.go:102] pod "metrics-server-569cc877fc-shvq4" in "kube-system" namespace has status "Ready":"False"
	I0729 19:00:08.973171  151436 api_server.go:253] Checking apiserver healthz at https://192.168.72.7:8443/healthz ...
	I0729 19:00:11.392092  151436 api_server.go:279] https://192.168.72.7:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 19:00:11.392131  151436 api_server.go:103] status: https://192.168.72.7:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 19:00:11.392145  151436 api_server.go:253] Checking apiserver healthz at https://192.168.72.7:8443/healthz ...
	I0729 19:00:11.562324  151436 api_server.go:279] https://192.168.72.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:00:11.562359  151436 api_server.go:103] status: https://192.168.72.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:00:11.562379  151436 api_server.go:253] Checking apiserver healthz at https://192.168.72.7:8443/healthz ...
	I0729 19:00:11.580385  151436 api_server.go:279] https://192.168.72.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:00:11.580419  151436 api_server.go:103] status: https://192.168.72.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:00:11.972756  151436 api_server.go:253] Checking apiserver healthz at https://192.168.72.7:8443/healthz ...
	I0729 19:00:11.978846  151436 api_server.go:279] https://192.168.72.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:00:11.978879  151436 api_server.go:103] status: https://192.168.72.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:00:12.473469  151436 api_server.go:253] Checking apiserver healthz at https://192.168.72.7:8443/healthz ...
	I0729 19:00:12.479607  151436 api_server.go:279] https://192.168.72.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:00:12.479639  151436 api_server.go:103] status: https://192.168.72.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:00:12.972985  151436 api_server.go:253] Checking apiserver healthz at https://192.168.72.7:8443/healthz ...
	I0729 19:00:12.977839  151436 api_server.go:279] https://192.168.72.7:8443/healthz returned 200:
	ok
	I0729 19:00:12.984758  151436 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 19:00:12.984795  151436 api_server.go:131] duration metric: took 4.512189385s to wait for apiserver health ...
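The healthz wait above tolerates "connection refused", 403 (anonymous user) and 500 (post-start hooks still failing) responses, and only stops once /healthz returns 200. A simplified Go polling loop in the same spirit is sketched below; TLS verification is skipped purely to keep the example short, which production code should not do, and the timeout value is an assumption for the example.

// Poll the apiserver's /healthz until it returns 200, treating errors and
// non-200 responses as "not ready yet". Endpoint address taken from the log.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // the "healthz returned 200: ok" case above
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.7:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}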
	I0729 19:00:12.984807  151436 cni.go:84] Creating CNI manager for ""
	I0729 19:00:12.984817  151436 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:00:12.986743  151436 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 19:00:12.988328  151436 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 19:00:12.999391  151436 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 19:00:13.023047  151436 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 19:00:13.032609  151436 system_pods.go:59] 8 kube-system pods found
	I0729 19:00:13.032640  151436 system_pods.go:61] "coredns-5cfdc65f69-8xrtn" [e052ba13-0167-4afd-965d-fdc87a476273] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:00:13.032647  151436 system_pods.go:61] "etcd-no-preload-524369" [5b18c61b-663d-4701-b54e-69873946e9bf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 19:00:13.032660  151436 system_pods.go:61] "kube-apiserver-no-preload-524369" [15fd56b8-6c41-459a-bde4-eb7e40cf37fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 19:00:13.032666  151436 system_pods.go:61] "kube-controller-manager-no-preload-524369" [69f9b003-1d33-46af-afa6-d0a17e3a2db9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 19:00:13.032671  151436 system_pods.go:61] "kube-proxy-x9chl" [81b661ac-8fbc-47aa-a528-68f00a89a6eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 19:00:13.032677  151436 system_pods.go:61] "kube-scheduler-no-preload-524369" [352196e5-bb61-4848-b8ab-f20b46b32647] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 19:00:13.032685  151436 system_pods.go:61] "metrics-server-78fcd8795b-tscl7" [6010f9f0-71ff-43ec-817f-afa7f0eeb856] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:00:13.032692  151436 system_pods.go:61] "storage-provisioner" [c7250b7c-c3a2-40bf-b6ee-fc57b62d6654] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 19:00:13.032702  151436 system_pods.go:74] duration metric: took 9.634331ms to wait for pod list to return data ...
	I0729 19:00:13.032711  151436 node_conditions.go:102] verifying NodePressure condition ...
	I0729 19:00:13.036473  151436 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 19:00:13.036506  151436 node_conditions.go:123] node cpu capacity is 2
	I0729 19:00:13.036522  151436 node_conditions.go:105] duration metric: took 3.804937ms to run NodePressure ...
	I0729 19:00:13.036545  151436 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:00:13.367863  151436 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 19:00:13.373362  151436 kubeadm.go:739] kubelet initialised
	I0729 19:00:13.373382  151436 kubeadm.go:740] duration metric: took 5.492452ms waiting for restarted kubelet to initialise ...
	I0729 19:00:13.373392  151436 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:00:13.380629  151436 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-8xrtn" in "kube-system" namespace to be "Ready" ...
	I0729 19:00:13.386765  151436 pod_ready.go:97] node "no-preload-524369" hosting pod "coredns-5cfdc65f69-8xrtn" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-524369" has status "Ready":"False"
	I0729 19:00:13.386786  151436 pod_ready.go:81] duration metric: took 6.134462ms for pod "coredns-5cfdc65f69-8xrtn" in "kube-system" namespace to be "Ready" ...
	E0729 19:00:13.386795  151436 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-524369" hosting pod "coredns-5cfdc65f69-8xrtn" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-524369" has status "Ready":"False"
	I0729 19:00:13.386802  151436 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-524369" in "kube-system" namespace to be "Ready" ...
	I0729 19:00:13.391761  151436 pod_ready.go:97] node "no-preload-524369" hosting pod "etcd-no-preload-524369" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-524369" has status "Ready":"False"
	I0729 19:00:13.391781  151436 pod_ready.go:81] duration metric: took 4.968639ms for pod "etcd-no-preload-524369" in "kube-system" namespace to be "Ready" ...
	E0729 19:00:13.391791  151436 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-524369" hosting pod "etcd-no-preload-524369" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-524369" has status "Ready":"False"
	I0729 19:00:13.391798  151436 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-524369" in "kube-system" namespace to be "Ready" ...
	I0729 19:00:13.396940  151436 pod_ready.go:97] node "no-preload-524369" hosting pod "kube-apiserver-no-preload-524369" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-524369" has status "Ready":"False"
	I0729 19:00:13.396960  151436 pod_ready.go:81] duration metric: took 5.154089ms for pod "kube-apiserver-no-preload-524369" in "kube-system" namespace to be "Ready" ...
	E0729 19:00:13.396970  151436 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-524369" hosting pod "kube-apiserver-no-preload-524369" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-524369" has status "Ready":"False"
	I0729 19:00:13.396977  151436 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-524369" in "kube-system" namespace to be "Ready" ...
	I0729 19:00:13.427047  151436 pod_ready.go:97] node "no-preload-524369" hosting pod "kube-controller-manager-no-preload-524369" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-524369" has status "Ready":"False"
	I0729 19:00:13.427074  151436 pod_ready.go:81] duration metric: took 30.08704ms for pod "kube-controller-manager-no-preload-524369" in "kube-system" namespace to be "Ready" ...
	E0729 19:00:13.427084  151436 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-524369" hosting pod "kube-controller-manager-no-preload-524369" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-524369" has status "Ready":"False"
	I0729 19:00:13.427089  151436 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-x9chl" in "kube-system" namespace to be "Ready" ...
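The pod_ready lines above poll each system-critical pod until its PodReady condition reports True, skipping the wait while the hosting node itself is not "Ready". A rough client-go sketch of that per-pod wait follows; the kubeconfig path and the 2-second poll interval are assumptions for the example, not values from the log, and this is not the test harness's own helper.

// Wait for a kube-system pod's PodReady condition to become True, or time out.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second) // assumed poll interval
	}
	return fmt.Errorf("pod %s/%s not Ready after %s", ns, name, timeout)
}

func main() {
	// Hypothetical kubeconfig path for the example.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitPodReady(cs, "kube-system", "kube-proxy-x9chl", 4*time.Minute))
}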
	I0729 19:00:09.773708  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:10.274054  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:10.774168  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:11.274093  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:11.774054  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:12.274363  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:12.774120  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:13.274081  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:13.773555  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:14.274061  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:12.445081  151772 pod_ready.go:102] pod "metrics-server-569cc877fc-shvq4" in "kube-system" namespace has status "Ready":"False"
	I0729 19:00:14.946305  151772 pod_ready.go:102] pod "metrics-server-569cc877fc-shvq4" in "kube-system" namespace has status "Ready":"False"
	I0729 19:00:13.827022  151436 pod_ready.go:92] pod "kube-proxy-x9chl" in "kube-system" namespace has status "Ready":"True"
	I0729 19:00:13.827048  151436 pod_ready.go:81] duration metric: took 399.948791ms for pod "kube-proxy-x9chl" in "kube-system" namespace to be "Ready" ...
	I0729 19:00:13.827060  151436 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-524369" in "kube-system" namespace to be "Ready" ...
	I0729 19:00:15.832706  151436 pod_ready.go:102] pod "kube-scheduler-no-preload-524369" in "kube-system" namespace has status "Ready":"False"
	I0729 19:00:17.833977  151436 pod_ready.go:102] pod "kube-scheduler-no-preload-524369" in "kube-system" namespace has status "Ready":"False"
	I0729 19:00:14.773600  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:15.274094  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:15.774239  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:16.273651  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:16.773467  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:17.273714  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:17.773832  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:18.273382  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:18.773798  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:19.273832  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:17.445878  151772 pod_ready.go:102] pod "metrics-server-569cc877fc-shvq4" in "kube-system" namespace has status "Ready":"False"
	I0729 19:00:19.447802  151772 pod_ready.go:102] pod "metrics-server-569cc877fc-shvq4" in "kube-system" namespace has status "Ready":"False"
	I0729 19:00:18.833219  151436 pod_ready.go:92] pod "kube-scheduler-no-preload-524369" in "kube-system" namespace has status "Ready":"True"
	I0729 19:00:18.833243  151436 pod_ready.go:81] duration metric: took 5.006174421s for pod "kube-scheduler-no-preload-524369" in "kube-system" namespace to be "Ready" ...
	I0729 19:00:18.833255  151436 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-tscl7" in "kube-system" namespace to be "Ready" ...
	I0729 19:00:20.839426  151436 pod_ready.go:102] pod "metrics-server-78fcd8795b-tscl7" in "kube-system" namespace has status "Ready":"False"
	I0729 19:00:22.840642  151436 pod_ready.go:102] pod "metrics-server-78fcd8795b-tscl7" in "kube-system" namespace has status "Ready":"False"
	I0729 19:00:19.773386  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:20.274067  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:20.774073  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:21.274066  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:21.773468  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:22.274072  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:22.773775  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:23.274078  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:23.774074  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:24.273444  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:21.946936  151772 pod_ready.go:102] pod "metrics-server-569cc877fc-shvq4" in "kube-system" namespace has status "Ready":"False"
	I0729 19:00:24.445047  151772 pod_ready.go:102] pod "metrics-server-569cc877fc-shvq4" in "kube-system" namespace has status "Ready":"False"
	I0729 19:00:25.340191  151436 pod_ready.go:102] pod "metrics-server-78fcd8795b-tscl7" in "kube-system" namespace has status "Ready":"False"
	I0729 19:00:27.839058  151436 pod_ready.go:102] pod "metrics-server-78fcd8795b-tscl7" in "kube-system" namespace has status "Ready":"False"
	I0729 19:00:24.774273  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:25.273450  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:25.773595  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:26.273427  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:26.773353  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:27.274332  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:27.773884  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:28.273365  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:28.774166  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:29.273960  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:26.945315  151772 pod_ready.go:102] pod "metrics-server-569cc877fc-shvq4" in "kube-system" namespace has status "Ready":"False"
	I0729 19:00:29.444546  151772 pod_ready.go:102] pod "metrics-server-569cc877fc-shvq4" in "kube-system" namespace has status "Ready":"False"
	I0729 19:00:31.449184  151772 pod_ready.go:102] pod "metrics-server-569cc877fc-shvq4" in "kube-system" namespace has status "Ready":"False"
	I0729 19:00:29.841522  151436 pod_ready.go:102] pod "metrics-server-78fcd8795b-tscl7" in "kube-system" namespace has status "Ready":"False"
	I0729 19:00:32.339563  151436 pod_ready.go:102] pod "metrics-server-78fcd8795b-tscl7" in "kube-system" namespace has status "Ready":"False"
	I0729 19:00:29.773369  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:30.273412  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:30.773846  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:31.274110  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:31.773869  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:32.273833  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:32.773807  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:33.274079  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:33.773718  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:34.274389  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:33.948147  151772 pod_ready.go:102] pod "metrics-server-569cc877fc-shvq4" in "kube-system" namespace has status "Ready":"False"
	I0729 19:00:36.445298  151772 pod_ready.go:102] pod "metrics-server-569cc877fc-shvq4" in "kube-system" namespace has status "Ready":"False"
	I0729 19:00:34.340494  151436 pod_ready.go:102] pod "metrics-server-78fcd8795b-tscl7" in "kube-system" namespace has status "Ready":"False"
	I0729 19:00:36.839693  151436 pod_ready.go:102] pod "metrics-server-78fcd8795b-tscl7" in "kube-system" namespace has status "Ready":"False"
	I0729 19:00:34.774252  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:35.273526  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:35.774031  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:36.273954  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:36.773765  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:37.273786  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:37.774233  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:38.273605  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:38.773655  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:39.274064  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:38.945104  151772 pod_ready.go:102] pod "metrics-server-569cc877fc-shvq4" in "kube-system" namespace has status "Ready":"False"
	I0729 19:00:40.946064  151772 pod_ready.go:102] pod "metrics-server-569cc877fc-shvq4" in "kube-system" namespace has status "Ready":"False"
	I0729 19:00:38.839979  151436 pod_ready.go:102] pod "metrics-server-78fcd8795b-tscl7" in "kube-system" namespace has status "Ready":"False"
	I0729 19:00:41.340839  151436 pod_ready.go:102] pod "metrics-server-78fcd8795b-tscl7" in "kube-system" namespace has status "Ready":"False"
	I0729 19:00:39.773416  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:00:39.773516  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:00:39.814400  152077 cri.go:89] found id: ""
	I0729 19:00:39.814426  152077 logs.go:276] 0 containers: []
	W0729 19:00:39.814435  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:00:39.814441  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:00:39.814495  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:00:39.850437  152077 cri.go:89] found id: ""
	I0729 19:00:39.850466  152077 logs.go:276] 0 containers: []
	W0729 19:00:39.850478  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:00:39.850486  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:00:39.850550  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:00:39.886841  152077 cri.go:89] found id: ""
	I0729 19:00:39.886877  152077 logs.go:276] 0 containers: []
	W0729 19:00:39.886889  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:00:39.886898  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:00:39.886962  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:00:39.921450  152077 cri.go:89] found id: ""
	I0729 19:00:39.921483  152077 logs.go:276] 0 containers: []
	W0729 19:00:39.921498  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:00:39.921508  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:00:39.921574  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:00:39.959364  152077 cri.go:89] found id: ""
	I0729 19:00:39.959390  152077 logs.go:276] 0 containers: []
	W0729 19:00:39.959398  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:00:39.959404  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:00:39.959461  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:00:39.995074  152077 cri.go:89] found id: ""
	I0729 19:00:39.995101  152077 logs.go:276] 0 containers: []
	W0729 19:00:39.995112  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:00:39.995121  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:00:39.995185  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:00:40.033101  152077 cri.go:89] found id: ""
	I0729 19:00:40.033131  152077 logs.go:276] 0 containers: []
	W0729 19:00:40.033146  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:00:40.033154  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:00:40.033217  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:00:40.069273  152077 cri.go:89] found id: ""
	I0729 19:00:40.069301  152077 logs.go:276] 0 containers: []
	W0729 19:00:40.069311  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
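Each "listing CRI containers" step above is a single crictl query per component; an empty result is the `found id: ""` case, meaning the control plane has not started any containers yet, so the code falls back to gathering logs below. A hedged Go wrapper around the same query (assuming sudo and crictl are available on the node) might look like this.

// Run "crictl ps -a --quiet --name=<component>" and return the container IDs,
// one per output line; an empty slice corresponds to the log's `found id: ""`.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(strings.TrimSpace(string(out))), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager"} {
		ids, err := listContainerIDs(c)
		fmt.Printf("%s: %d containers %v, err=%v\n", c, len(ids), ids, err)
	}
}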
	I0729 19:00:40.069326  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:00:40.069344  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:00:40.121473  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:00:40.121511  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:00:40.136267  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:00:40.136300  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:00:40.255325  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:00:40.255347  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:00:40.255365  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:00:40.322460  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:00:40.322497  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:00:42.862734  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:42.876011  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:00:42.876075  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:00:42.915807  152077 cri.go:89] found id: ""
	I0729 19:00:42.915836  152077 logs.go:276] 0 containers: []
	W0729 19:00:42.915845  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:00:42.915856  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:00:42.915916  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:00:42.961500  152077 cri.go:89] found id: ""
	I0729 19:00:42.961535  152077 logs.go:276] 0 containers: []
	W0729 19:00:42.961546  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:00:42.961553  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:00:42.961617  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:00:43.006788  152077 cri.go:89] found id: ""
	I0729 19:00:43.006831  152077 logs.go:276] 0 containers: []
	W0729 19:00:43.006843  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:00:43.006852  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:00:43.006909  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:00:43.054235  152077 cri.go:89] found id: ""
	I0729 19:00:43.054266  152077 logs.go:276] 0 containers: []
	W0729 19:00:43.054277  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:00:43.054285  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:00:43.054347  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:00:43.093134  152077 cri.go:89] found id: ""
	I0729 19:00:43.093161  152077 logs.go:276] 0 containers: []
	W0729 19:00:43.093170  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:00:43.093176  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:00:43.093225  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:00:43.128632  152077 cri.go:89] found id: ""
	I0729 19:00:43.128661  152077 logs.go:276] 0 containers: []
	W0729 19:00:43.128670  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:00:43.128676  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:00:43.128735  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:00:43.164470  152077 cri.go:89] found id: ""
	I0729 19:00:43.164495  152077 logs.go:276] 0 containers: []
	W0729 19:00:43.164503  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:00:43.164509  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:00:43.164565  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:00:43.198401  152077 cri.go:89] found id: ""
	I0729 19:00:43.198433  152077 logs.go:276] 0 containers: []
	W0729 19:00:43.198444  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:00:43.198457  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:00:43.198474  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:00:43.211431  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:00:43.211456  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:00:43.298317  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
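The "connection to the server localhost:8443 was refused" error repeated in these blocks means nothing is listening on the apiserver port yet, so every kubectl-based gatherer fails the same way until the control plane comes up. A minimal manual probe from inside the node (a sketch, assuming SSH access to the VM and the default apiserver port 8443) would be:

	# Hypothetical check: is anything listening on 8443, and does the apiserver answer /healthz?
	sudo ss -tlnp | grep ':8443' || echo "no listener on 8443"
	curl -sk https://localhost:8443/healthz || echo "apiserver not responding"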
	I0729 19:00:43.298346  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:00:43.298367  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:00:43.372987  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:00:43.373023  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:00:43.411907  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:00:43.411935  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
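Each cycle above is the log collector probing CRI-O for every control-plane container by name before falling back to kubelet, dmesg, CRI-O journal, and container-status output. A rough one-shot equivalent of a single scan pass (a sketch, reusing the exact crictl invocation from the log and assuming crictl is configured for CRI-O's socket) is:

	# One pass of the per-component scan performed repeatedly above.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  echo "$name: ${ids:-<no container found>}"
	done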
	I0729 19:00:43.445197  151772 pod_ready.go:102] pod "metrics-server-569cc877fc-shvq4" in "kube-system" namespace has status "Ready":"False"
	I0729 19:00:45.445348  151772 pod_ready.go:102] pod "metrics-server-569cc877fc-shvq4" in "kube-system" namespace has status "Ready":"False"
	I0729 19:00:43.839704  151436 pod_ready.go:102] pod "metrics-server-78fcd8795b-tscl7" in "kube-system" namespace has status "Ready":"False"
	I0729 19:00:45.840132  151436 pod_ready.go:102] pod "metrics-server-78fcd8795b-tscl7" in "kube-system" namespace has status "Ready":"False"
	I0729 19:00:48.339491  151436 pod_ready.go:102] pod "metrics-server-78fcd8795b-tscl7" in "kube-system" namespace has status "Ready":"False"
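The interleaved pod_ready lines come from separate test processes polling the metrics-server pod until its Ready condition becomes True. The same wait can be expressed directly with kubectl (a sketch; the k8s-app=metrics-server label selector is an assumption, the log only shows the generated pod names):

	# Hypothetical equivalent of the readiness poll behind the pod_ready messages.
	kubectl -n kube-system wait pod -l k8s-app=metrics-server \
	  --for=condition=Ready --timeout=5m
	# Or read the condition directly:
	kubectl -n kube-system get pod -l k8s-app=metrics-server \
	  -o jsonpath='{.items[*].status.conditions[?(@.type=="Ready")].status}'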
	I0729 19:00:45.964405  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:45.979422  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:00:45.979490  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:00:46.019631  152077 cri.go:89] found id: ""
	I0729 19:00:46.019658  152077 logs.go:276] 0 containers: []
	W0729 19:00:46.019666  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:00:46.019672  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:00:46.019722  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:00:46.060112  152077 cri.go:89] found id: ""
	I0729 19:00:46.060141  152077 logs.go:276] 0 containers: []
	W0729 19:00:46.060149  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:00:46.060155  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:00:46.060222  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:00:46.095008  152077 cri.go:89] found id: ""
	I0729 19:00:46.095036  152077 logs.go:276] 0 containers: []
	W0729 19:00:46.095046  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:00:46.095054  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:00:46.095123  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:00:46.136824  152077 cri.go:89] found id: ""
	I0729 19:00:46.136850  152077 logs.go:276] 0 containers: []
	W0729 19:00:46.136874  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:00:46.136883  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:00:46.136944  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:00:46.175572  152077 cri.go:89] found id: ""
	I0729 19:00:46.175597  152077 logs.go:276] 0 containers: []
	W0729 19:00:46.175606  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:00:46.175612  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:00:46.175662  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:00:46.212359  152077 cri.go:89] found id: ""
	I0729 19:00:46.212394  152077 logs.go:276] 0 containers: []
	W0729 19:00:46.212409  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:00:46.212418  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:00:46.212482  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:00:46.250722  152077 cri.go:89] found id: ""
	I0729 19:00:46.250757  152077 logs.go:276] 0 containers: []
	W0729 19:00:46.250768  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:00:46.250776  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:00:46.250846  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:00:46.284967  152077 cri.go:89] found id: ""
	I0729 19:00:46.284992  152077 logs.go:276] 0 containers: []
	W0729 19:00:46.285006  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:00:46.285015  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:00:46.285027  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:00:46.337522  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:00:46.337553  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:00:46.350965  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:00:46.350992  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:00:46.423899  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:00:46.423924  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:00:46.423947  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:00:46.500612  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:00:46.500651  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:00:49.039471  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:49.054210  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:00:49.054278  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:00:49.094352  152077 cri.go:89] found id: ""
	I0729 19:00:49.094377  152077 logs.go:276] 0 containers: []
	W0729 19:00:49.094385  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:00:49.094393  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:00:49.094450  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:00:49.134527  152077 cri.go:89] found id: ""
	I0729 19:00:49.134558  152077 logs.go:276] 0 containers: []
	W0729 19:00:49.134569  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:00:49.134577  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:00:49.134646  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:00:49.172752  152077 cri.go:89] found id: ""
	I0729 19:00:49.172783  152077 logs.go:276] 0 containers: []
	W0729 19:00:49.172797  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:00:49.172805  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:00:49.172900  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:00:49.206900  152077 cri.go:89] found id: ""
	I0729 19:00:49.206923  152077 logs.go:276] 0 containers: []
	W0729 19:00:49.206931  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:00:49.206937  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:00:49.206998  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:00:49.241708  152077 cri.go:89] found id: ""
	I0729 19:00:49.241736  152077 logs.go:276] 0 containers: []
	W0729 19:00:49.241745  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:00:49.241751  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:00:49.241803  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:00:49.279727  152077 cri.go:89] found id: ""
	I0729 19:00:49.279757  152077 logs.go:276] 0 containers: []
	W0729 19:00:49.279768  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:00:49.279776  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:00:49.279842  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:00:49.313695  152077 cri.go:89] found id: ""
	I0729 19:00:49.313722  152077 logs.go:276] 0 containers: []
	W0729 19:00:49.313731  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:00:49.313737  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:00:49.313795  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:00:49.351878  152077 cri.go:89] found id: ""
	I0729 19:00:49.351910  152077 logs.go:276] 0 containers: []
	W0729 19:00:49.351920  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:00:49.351932  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:00:49.351946  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:00:49.364944  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:00:49.364971  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:00:49.433729  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:00:49.433756  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:00:49.433771  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:00:49.513965  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:00:49.514002  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:00:49.555427  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:00:49.555459  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:00:47.946595  151772 pod_ready.go:102] pod "metrics-server-569cc877fc-shvq4" in "kube-system" namespace has status "Ready":"False"
	I0729 19:00:50.445268  151772 pod_ready.go:102] pod "metrics-server-569cc877fc-shvq4" in "kube-system" namespace has status "Ready":"False"
	I0729 19:00:50.340373  151436 pod_ready.go:102] pod "metrics-server-78fcd8795b-tscl7" in "kube-system" namespace has status "Ready":"False"
	I0729 19:00:52.839167  151436 pod_ready.go:102] pod "metrics-server-78fcd8795b-tscl7" in "kube-system" namespace has status "Ready":"False"
	I0729 19:00:52.108824  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:52.122490  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:00:52.122568  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:00:52.158170  152077 cri.go:89] found id: ""
	I0729 19:00:52.158202  152077 logs.go:276] 0 containers: []
	W0729 19:00:52.158214  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:00:52.158222  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:00:52.158288  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:00:52.192916  152077 cri.go:89] found id: ""
	I0729 19:00:52.192947  152077 logs.go:276] 0 containers: []
	W0729 19:00:52.192959  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:00:52.192967  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:00:52.193040  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:00:52.225783  152077 cri.go:89] found id: ""
	I0729 19:00:52.225815  152077 logs.go:276] 0 containers: []
	W0729 19:00:52.225826  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:00:52.225834  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:00:52.225899  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:00:52.265368  152077 cri.go:89] found id: ""
	I0729 19:00:52.265395  152077 logs.go:276] 0 containers: []
	W0729 19:00:52.265406  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:00:52.265413  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:00:52.265473  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:00:52.299857  152077 cri.go:89] found id: ""
	I0729 19:00:52.299904  152077 logs.go:276] 0 containers: []
	W0729 19:00:52.299915  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:00:52.299923  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:00:52.299992  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:00:52.338117  152077 cri.go:89] found id: ""
	I0729 19:00:52.338143  152077 logs.go:276] 0 containers: []
	W0729 19:00:52.338154  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:00:52.338162  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:00:52.338222  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:00:52.372237  152077 cri.go:89] found id: ""
	I0729 19:00:52.372261  152077 logs.go:276] 0 containers: []
	W0729 19:00:52.372269  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:00:52.372275  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:00:52.372324  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:00:52.409303  152077 cri.go:89] found id: ""
	I0729 19:00:52.409329  152077 logs.go:276] 0 containers: []
	W0729 19:00:52.409337  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:00:52.409347  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:00:52.409360  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:00:52.460746  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:00:52.460777  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:00:52.474486  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:00:52.474515  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:00:52.553416  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:00:52.553438  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:00:52.553455  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:00:52.638968  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:00:52.639015  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:00:52.944877  151772 pod_ready.go:102] pod "metrics-server-569cc877fc-shvq4" in "kube-system" namespace has status "Ready":"False"
	I0729 19:00:54.945138  151772 pod_ready.go:102] pod "metrics-server-569cc877fc-shvq4" in "kube-system" namespace has status "Ready":"False"
	I0729 19:00:54.840222  151436 pod_ready.go:102] pod "metrics-server-78fcd8795b-tscl7" in "kube-system" namespace has status "Ready":"False"
	I0729 19:00:57.339763  151436 pod_ready.go:102] pod "metrics-server-78fcd8795b-tscl7" in "kube-system" namespace has status "Ready":"False"
	I0729 19:00:55.179242  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:55.192550  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:00:55.192610  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:00:55.228887  152077 cri.go:89] found id: ""
	I0729 19:00:55.228917  152077 logs.go:276] 0 containers: []
	W0729 19:00:55.228925  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:00:55.228930  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:00:55.228989  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:00:55.266646  152077 cri.go:89] found id: ""
	I0729 19:00:55.266679  152077 logs.go:276] 0 containers: []
	W0729 19:00:55.266690  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:00:55.266697  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:00:55.266758  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:00:55.307050  152077 cri.go:89] found id: ""
	I0729 19:00:55.307090  152077 logs.go:276] 0 containers: []
	W0729 19:00:55.307102  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:00:55.307110  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:00:55.307172  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:00:55.343778  152077 cri.go:89] found id: ""
	I0729 19:00:55.343806  152077 logs.go:276] 0 containers: []
	W0729 19:00:55.343817  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:00:55.343824  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:00:55.343892  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:00:55.378481  152077 cri.go:89] found id: ""
	I0729 19:00:55.378512  152077 logs.go:276] 0 containers: []
	W0729 19:00:55.378524  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:00:55.378532  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:00:55.378593  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:00:55.412401  152077 cri.go:89] found id: ""
	I0729 19:00:55.412432  152077 logs.go:276] 0 containers: []
	W0729 19:00:55.412445  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:00:55.412452  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:00:55.412516  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:00:55.447365  152077 cri.go:89] found id: ""
	I0729 19:00:55.447392  152077 logs.go:276] 0 containers: []
	W0729 19:00:55.447400  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:00:55.447406  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:00:55.447452  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:00:55.482482  152077 cri.go:89] found id: ""
	I0729 19:00:55.482506  152077 logs.go:276] 0 containers: []
	W0729 19:00:55.482515  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:00:55.482526  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:00:55.482541  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:00:55.552333  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:00:55.552361  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:00:55.552379  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:00:55.632588  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:00:55.632626  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:00:55.674827  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:00:55.674865  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:00:55.728009  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:00:55.728054  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:00:58.243181  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:58.256700  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:00:58.256762  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:00:58.291952  152077 cri.go:89] found id: ""
	I0729 19:00:58.291979  152077 logs.go:276] 0 containers: []
	W0729 19:00:58.291989  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:00:58.291995  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:00:58.292055  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:00:58.325824  152077 cri.go:89] found id: ""
	I0729 19:00:58.325858  152077 logs.go:276] 0 containers: []
	W0729 19:00:58.325869  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:00:58.325877  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:00:58.325934  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:00:58.359100  152077 cri.go:89] found id: ""
	I0729 19:00:58.359130  152077 logs.go:276] 0 containers: []
	W0729 19:00:58.359142  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:00:58.359149  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:00:58.359236  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:00:58.390409  152077 cri.go:89] found id: ""
	I0729 19:00:58.390442  152077 logs.go:276] 0 containers: []
	W0729 19:00:58.390453  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:00:58.390462  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:00:58.390525  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:00:58.426976  152077 cri.go:89] found id: ""
	I0729 19:00:58.427004  152077 logs.go:276] 0 containers: []
	W0729 19:00:58.427023  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:00:58.427031  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:00:58.427091  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:00:58.460492  152077 cri.go:89] found id: ""
	I0729 19:00:58.460528  152077 logs.go:276] 0 containers: []
	W0729 19:00:58.460537  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:00:58.460545  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:00:58.460608  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:00:58.495894  152077 cri.go:89] found id: ""
	I0729 19:00:58.495930  152077 logs.go:276] 0 containers: []
	W0729 19:00:58.495942  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:00:58.495950  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:00:58.496030  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:00:58.530710  152077 cri.go:89] found id: ""
	I0729 19:00:58.530739  152077 logs.go:276] 0 containers: []
	W0729 19:00:58.530750  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:00:58.530762  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:00:58.530779  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:00:58.607469  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:00:58.607515  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:00:58.646982  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:00:58.647016  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:00:58.698304  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:00:58.698356  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:00:58.713370  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:00:58.713398  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:00:58.786858  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:00:56.945381  151772 pod_ready.go:102] pod "metrics-server-569cc877fc-shvq4" in "kube-system" namespace has status "Ready":"False"
	I0729 19:00:59.444570  151772 pod_ready.go:102] pod "metrics-server-569cc877fc-shvq4" in "kube-system" namespace has status "Ready":"False"
	I0729 19:01:01.446580  151772 pod_ready.go:102] pod "metrics-server-569cc877fc-shvq4" in "kube-system" namespace has status "Ready":"False"
	I0729 19:00:59.340211  151436 pod_ready.go:102] pod "metrics-server-78fcd8795b-tscl7" in "kube-system" namespace has status "Ready":"False"
	I0729 19:01:01.340333  151436 pod_ready.go:102] pod "metrics-server-78fcd8795b-tscl7" in "kube-system" namespace has status "Ready":"False"
	I0729 19:01:01.287427  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:01:01.301239  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:01:01.301316  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:01:01.337329  152077 cri.go:89] found id: ""
	I0729 19:01:01.337357  152077 logs.go:276] 0 containers: []
	W0729 19:01:01.337368  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:01:01.337376  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:01:01.337440  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:01:01.375796  152077 cri.go:89] found id: ""
	I0729 19:01:01.375828  152077 logs.go:276] 0 containers: []
	W0729 19:01:01.375836  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:01:01.375843  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:01:01.375904  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:01:01.408560  152077 cri.go:89] found id: ""
	I0729 19:01:01.408585  152077 logs.go:276] 0 containers: []
	W0729 19:01:01.408594  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:01:01.408600  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:01:01.408658  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:01:01.443797  152077 cri.go:89] found id: ""
	I0729 19:01:01.443833  152077 logs.go:276] 0 containers: []
	W0729 19:01:01.443841  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:01:01.443849  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:01:01.443909  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:01:01.478900  152077 cri.go:89] found id: ""
	I0729 19:01:01.478928  152077 logs.go:276] 0 containers: []
	W0729 19:01:01.478941  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:01:01.478948  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:01:01.479014  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:01:01.512370  152077 cri.go:89] found id: ""
	I0729 19:01:01.512398  152077 logs.go:276] 0 containers: []
	W0729 19:01:01.512407  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:01:01.512413  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:01:01.512463  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:01:01.546996  152077 cri.go:89] found id: ""
	I0729 19:01:01.547031  152077 logs.go:276] 0 containers: []
	W0729 19:01:01.547042  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:01:01.547050  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:01:01.547113  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:01:01.581135  152077 cri.go:89] found id: ""
	I0729 19:01:01.581161  152077 logs.go:276] 0 containers: []
	W0729 19:01:01.581169  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:01:01.581178  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:01:01.581194  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:01:01.595012  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:01:01.595042  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:01:01.670013  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:01:01.670034  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:01:01.670047  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:01:01.746304  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:01:01.746342  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:01:01.788085  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:01:01.788122  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:01:04.339966  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:01:04.353377  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:01:04.353447  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:01:04.386653  152077 cri.go:89] found id: ""
	I0729 19:01:04.386680  152077 logs.go:276] 0 containers: []
	W0729 19:01:04.386691  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:01:04.386699  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:01:04.386763  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:01:04.420317  152077 cri.go:89] found id: ""
	I0729 19:01:04.420350  152077 logs.go:276] 0 containers: []
	W0729 19:01:04.420360  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:01:04.420369  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:01:04.420436  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:01:04.454461  152077 cri.go:89] found id: ""
	I0729 19:01:04.454485  152077 logs.go:276] 0 containers: []
	W0729 19:01:04.454495  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:01:04.454502  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:01:04.454562  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:01:04.487377  152077 cri.go:89] found id: ""
	I0729 19:01:04.487403  152077 logs.go:276] 0 containers: []
	W0729 19:01:04.487415  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:01:04.487423  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:01:04.487489  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:01:04.520888  152077 cri.go:89] found id: ""
	I0729 19:01:04.520914  152077 logs.go:276] 0 containers: []
	W0729 19:01:04.520924  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:01:04.520930  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:01:04.520982  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:01:04.554321  152077 cri.go:89] found id: ""
	I0729 19:01:04.554345  152077 logs.go:276] 0 containers: []
	W0729 19:01:04.554354  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:01:04.554361  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:01:04.554427  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:01:03.946020  151772 pod_ready.go:102] pod "metrics-server-569cc877fc-shvq4" in "kube-system" namespace has status "Ready":"False"
	I0729 19:01:05.947069  151772 pod_ready.go:102] pod "metrics-server-569cc877fc-shvq4" in "kube-system" namespace has status "Ready":"False"
	I0729 19:01:03.840132  151436 pod_ready.go:102] pod "metrics-server-78fcd8795b-tscl7" in "kube-system" namespace has status "Ready":"False"
	I0729 19:01:06.339566  151436 pod_ready.go:102] pod "metrics-server-78fcd8795b-tscl7" in "kube-system" namespace has status "Ready":"False"
	I0729 19:01:08.339853  151436 pod_ready.go:102] pod "metrics-server-78fcd8795b-tscl7" in "kube-system" namespace has status "Ready":"False"
	I0729 19:01:04.593894  152077 cri.go:89] found id: ""
	I0729 19:01:04.593926  152077 logs.go:276] 0 containers: []
	W0729 19:01:04.593937  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:01:04.593945  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:01:04.594013  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:01:04.627113  152077 cri.go:89] found id: ""
	I0729 19:01:04.627140  152077 logs.go:276] 0 containers: []
	W0729 19:01:04.627148  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:01:04.627158  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:01:04.627170  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:01:04.678099  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:01:04.678134  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:01:04.692096  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:01:04.692125  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:01:04.763388  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:01:04.763414  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:01:04.763432  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:01:04.842745  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:01:04.842774  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:01:07.384259  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:01:07.397933  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:01:07.398000  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:01:07.443262  152077 cri.go:89] found id: ""
	I0729 19:01:07.443289  152077 logs.go:276] 0 containers: []
	W0729 19:01:07.443300  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:01:07.443308  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:01:07.443365  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:01:07.477719  152077 cri.go:89] found id: ""
	I0729 19:01:07.477749  152077 logs.go:276] 0 containers: []
	W0729 19:01:07.477764  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:01:07.477771  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:01:07.477835  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:01:07.512037  152077 cri.go:89] found id: ""
	I0729 19:01:07.512062  152077 logs.go:276] 0 containers: []
	W0729 19:01:07.512071  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:01:07.512077  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:01:07.512134  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:01:07.554189  152077 cri.go:89] found id: ""
	I0729 19:01:07.554223  152077 logs.go:276] 0 containers: []
	W0729 19:01:07.554234  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:01:07.554242  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:01:07.554307  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:01:07.588508  152077 cri.go:89] found id: ""
	I0729 19:01:07.588540  152077 logs.go:276] 0 containers: []
	W0729 19:01:07.588551  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:01:07.588559  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:01:07.588631  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:01:07.622139  152077 cri.go:89] found id: ""
	I0729 19:01:07.622164  152077 logs.go:276] 0 containers: []
	W0729 19:01:07.622176  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:01:07.622184  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:01:07.622254  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:01:07.656573  152077 cri.go:89] found id: ""
	I0729 19:01:07.656607  152077 logs.go:276] 0 containers: []
	W0729 19:01:07.656619  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:01:07.656627  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:01:07.656695  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:01:07.694720  152077 cri.go:89] found id: ""
	I0729 19:01:07.694748  152077 logs.go:276] 0 containers: []
	W0729 19:01:07.694759  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:01:07.694770  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:01:07.694787  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:01:07.762272  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:01:07.762294  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:01:07.762311  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:01:07.843424  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:01:07.843456  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:01:07.880999  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:01:07.881035  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:01:07.932111  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:01:07.932143  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:01:08.445354  151772 pod_ready.go:102] pod "metrics-server-569cc877fc-shvq4" in "kube-system" namespace has status "Ready":"False"
	I0729 19:01:10.445644  151772 pod_ready.go:102] pod "metrics-server-569cc877fc-shvq4" in "kube-system" namespace has status "Ready":"False"
	I0729 19:01:12.799226  139862 kubeadm.go:310] error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	I0729 19:01:12.799307  139862 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 19:01:12.801129  139862 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 19:01:12.801189  139862 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 19:01:12.801277  139862 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 19:01:12.801363  139862 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 19:01:12.801445  139862 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 19:01:12.801514  139862 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 19:01:12.803097  139862 out.go:204]   - Generating certificates and keys ...
	I0729 19:01:12.803250  139862 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 19:01:12.803300  139862 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 19:01:12.803421  139862 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 19:01:12.803504  139862 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 19:01:12.803595  139862 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 19:01:12.803643  139862 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 19:01:12.803715  139862 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 19:01:12.803777  139862 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 19:01:12.803864  139862 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 19:01:12.803946  139862 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 19:01:12.803977  139862 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 19:01:12.804027  139862 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 19:01:12.804073  139862 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 19:01:12.804137  139862 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 19:01:12.804208  139862 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 19:01:12.804259  139862 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 19:01:12.804329  139862 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 19:01:12.804428  139862 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 19:01:12.804491  139862 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 19:01:10.840131  151436 pod_ready.go:102] pod "metrics-server-78fcd8795b-tscl7" in "kube-system" namespace has status "Ready":"False"
	I0729 19:01:12.842996  151436 pod_ready.go:102] pod "metrics-server-78fcd8795b-tscl7" in "kube-system" namespace has status "Ready":"False"
	I0729 19:01:12.805920  139862 out.go:204]   - Booting up control plane ...
	I0729 19:01:12.806001  139862 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 19:01:12.806072  139862 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 19:01:12.806124  139862 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 19:01:12.806205  139862 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 19:01:12.806284  139862 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 19:01:12.806330  139862 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 19:01:12.806470  139862 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 19:01:12.806534  139862 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 19:01:12.806578  139862 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.020303ms
	I0729 19:01:12.806631  139862 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 19:01:12.806690  139862 kubeadm.go:310] [api-check] The API server is not healthy after 4m0.000040263s
	I0729 19:01:12.806693  139862 kubeadm.go:310] 
	I0729 19:01:12.806722  139862 kubeadm.go:310] Unfortunately, an error has occurred:
	I0729 19:01:12.806744  139862 kubeadm.go:310] 	context deadline exceeded
	I0729 19:01:12.806747  139862 kubeadm.go:310] 
	I0729 19:01:12.806772  139862 kubeadm.go:310] This error is likely caused by:
	I0729 19:01:12.806795  139862 kubeadm.go:310] 	- The kubelet is not running
	I0729 19:01:12.806885  139862 kubeadm.go:310] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 19:01:12.806892  139862 kubeadm.go:310] 
	I0729 19:01:12.806998  139862 kubeadm.go:310] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 19:01:12.807028  139862 kubeadm.go:310] 	- 'systemctl status kubelet'
	I0729 19:01:12.807064  139862 kubeadm.go:310] 	- 'journalctl -xeu kubelet'
	I0729 19:01:12.807067  139862 kubeadm.go:310] 
	I0729 19:01:12.807155  139862 kubeadm.go:310] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 19:01:12.807226  139862 kubeadm.go:310] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 19:01:12.807302  139862 kubeadm.go:310] Here is one example how you may list all running Kubernetes containers by using crictl:
	I0729 19:01:12.807379  139862 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 19:01:12.807441  139862 kubeadm.go:310] 	Once you have found the failing container, you can inspect its logs with:
	I0729 19:01:12.807555  139862 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
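The kubeadm output above spells out the troubleshooting path for a control plane that never became healthy; collected into one sequence (a sketch using only the commands quoted in that output) it is:

	# Troubleshooting steps suggested by kubeadm, in order.
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# Then, for a failing container id from the listing above:
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID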
	I0729 19:01:12.807599  139862 kubeadm.go:394] duration metric: took 12m20.17963328s to StartCluster
	I0729 19:01:12.807636  139862 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:01:12.807694  139862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:01:12.855581  139862 cri.go:89] found id: "324fce8cf128205679b140bf081c8bec93b3647c3956bfa96ad9b84424333074"
	I0729 19:01:12.855594  139862 cri.go:89] found id: ""
	I0729 19:01:12.855605  139862 logs.go:276] 1 containers: [324fce8cf128205679b140bf081c8bec93b3647c3956bfa96ad9b84424333074]
	I0729 19:01:12.855663  139862 ssh_runner.go:195] Run: which crictl
	I0729 19:01:12.860264  139862 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:01:12.860310  139862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:01:12.896569  139862 cri.go:89] found id: ""
	I0729 19:01:12.896587  139862 logs.go:276] 0 containers: []
	W0729 19:01:12.896595  139862 logs.go:278] No container was found matching "etcd"
	I0729 19:01:12.896601  139862 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:01:12.896667  139862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:01:12.931612  139862 cri.go:89] found id: ""
	I0729 19:01:12.931628  139862 logs.go:276] 0 containers: []
	W0729 19:01:12.931637  139862 logs.go:278] No container was found matching "coredns"
	I0729 19:01:12.931643  139862 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:01:12.931704  139862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:01:12.967640  139862 cri.go:89] found id: "549b70dfebf294b988e5b2b5c2a530a7ac6d338ea2129d36ca7fd93081293382"
	I0729 19:01:12.967652  139862 cri.go:89] found id: ""
	I0729 19:01:12.967661  139862 logs.go:276] 1 containers: [549b70dfebf294b988e5b2b5c2a530a7ac6d338ea2129d36ca7fd93081293382]
	I0729 19:01:12.967723  139862 ssh_runner.go:195] Run: which crictl
	I0729 19:01:12.971831  139862 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:01:12.971883  139862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:01:13.006181  139862 cri.go:89] found id: ""
	I0729 19:01:13.006199  139862 logs.go:276] 0 containers: []
	W0729 19:01:13.006208  139862 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:01:13.006215  139862 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:01:13.006273  139862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:01:13.046172  139862 cri.go:89] found id: ""
	I0729 19:01:13.046192  139862 logs.go:276] 0 containers: []
	W0729 19:01:13.046202  139862 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:01:13.046208  139862 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:01:13.046259  139862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:01:13.083726  139862 cri.go:89] found id: ""
	I0729 19:01:13.083750  139862 logs.go:276] 0 containers: []
	W0729 19:01:13.083758  139862 logs.go:278] No container was found matching "kindnet"
	I0729 19:01:13.083764  139862 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 19:01:13.083821  139862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 19:01:13.125407  139862 cri.go:89] found id: ""
	I0729 19:01:13.125422  139862 logs.go:276] 0 containers: []
	W0729 19:01:13.125430  139862 logs.go:278] No container was found matching "storage-provisioner"
	I0729 19:01:13.125440  139862 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:01:13.125454  139862 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:01:13.209826  139862 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:01:13.209842  139862 logs.go:123] Gathering logs for kube-apiserver [324fce8cf128205679b140bf081c8bec93b3647c3956bfa96ad9b84424333074] ...
	I0729 19:01:13.209858  139862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 324fce8cf128205679b140bf081c8bec93b3647c3956bfa96ad9b84424333074"
	I0729 19:01:13.249172  139862 logs.go:123] Gathering logs for kube-scheduler [549b70dfebf294b988e5b2b5c2a530a7ac6d338ea2129d36ca7fd93081293382] ...
	I0729 19:01:13.249191  139862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 549b70dfebf294b988e5b2b5c2a530a7ac6d338ea2129d36ca7fd93081293382"
	I0729 19:01:13.327426  139862 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:01:13.327447  139862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:01:13.535747  139862 logs.go:123] Gathering logs for container status ...
	I0729 19:01:13.535768  139862 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:01:13.592597  139862 logs.go:123] Gathering logs for kubelet ...
	I0729 19:01:13.592637  139862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:01:13.762592  139862 logs.go:123] Gathering logs for dmesg ...
	I0729 19:01:13.762612  139862 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0729 19:01:13.781546  139862 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.30.3
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.020303ms
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.000040263s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0729 19:01:13.781580  139862 out.go:239] * 
	W0729 19:01:13.781696  139862 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.30.3
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.020303ms
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.000040263s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 19:01:13.781732  139862 out.go:239] * 
	W0729 19:01:13.782952  139862 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 19:01:13.786222  139862 out.go:177] 
	W0729 19:01:13.787703  139862 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.30.3
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.020303ms
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.000040263s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 19:01:13.787743  139862 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0729 19:01:13.787768  139862 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0729 19:01:13.789238  139862 out.go:177] 
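
The run above exits with K8S_KUBELET_NOT_RUNNING after kubeadm times out waiting for a healthy API server. A minimal triage sketch, assembled only from commands the kubeadm/minikube output above itself suggests (assumed to be run inside the cert-expiration-974855 guest; the profile name passed to -p is assumed to match the node name, and the cgroup-driver flag is the log's own suggestion, not a verified fix):

    # Why is the control plane not coming up? Check the kubelet first.
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet
    # Preflight warned that the kubelet service is not enabled.
    sudo systemctl enable kubelet.service
    # List control-plane containers under CRI-O, then read the logs of the failing one
    # (CONTAINERID is the placeholder used by the kubeadm message above).
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
    # Suggestion quoted above: retry the start with an explicit kubelet cgroup driver.
    minikube start -p cert-expiration-974855 --extra-config=kubelet.cgroup-driver=systemd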
	
	
	==> CRI-O <==
	Jul 29 19:01:14 cert-expiration-974855 crio[2894]: time="2024-07-29 19:01:14.531335258Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722279674531305613,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eb55e427-ab6f-41b3-aed0-5dee010b6288 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:01:14 cert-expiration-974855 crio[2894]: time="2024-07-29 19:01:14.531989199Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3489ed8a-dca7-4192-91b6-d1becfa8ebd1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:01:14 cert-expiration-974855 crio[2894]: time="2024-07-29 19:01:14.532060010Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3489ed8a-dca7-4192-91b6-d1becfa8ebd1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:01:14 cert-expiration-974855 crio[2894]: time="2024-07-29 19:01:14.532136459Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:324fce8cf128205679b140bf081c8bec93b3647c3956bfa96ad9b84424333074,PodSandboxId:b8d40af6dadd8e5cb9a0ca4affbf3dc6140bc7e2a0d4c813b069b7bf94b960f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:15,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722279608543263167,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-cert-expiration-974855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ba837e13406d9d02f3d4da5e7097682,},Annotations:map[string]string{io.kubernetes.container.hash: 29b34c03,io.kubernetes.container.restartCount: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:549b70dfebf294b988e5b2b5c2a530a7ac6d338ea2129d36ca7fd93081293382,PodSandboxId:2c5a548467c7f7b92e51fa853c0e2af4efa1d57db95c7f1d2e14171ea712b685,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722279433171927826,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-cert-expiration-974855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3627cf8a29dbc4189f13ffd1c0fdacdd,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3489ed8a-dca7-4192-91b6-d1becfa8ebd1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:01:14 cert-expiration-974855 crio[2894]: time="2024-07-29 19:01:14.564588918Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b1621b3b-e169-4083-ae68-9f4dd296d0b4 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:01:14 cert-expiration-974855 crio[2894]: time="2024-07-29 19:01:14.564655748Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b1621b3b-e169-4083-ae68-9f4dd296d0b4 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:01:14 cert-expiration-974855 crio[2894]: time="2024-07-29 19:01:14.565831553Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f20fd815-52ad-4987-a221-e98c5bb9a7c4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:01:14 cert-expiration-974855 crio[2894]: time="2024-07-29 19:01:14.566184744Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722279674566162729,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f20fd815-52ad-4987-a221-e98c5bb9a7c4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:01:14 cert-expiration-974855 crio[2894]: time="2024-07-29 19:01:14.566581250Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=efe378d6-dfbb-4982-9ba4-49c20f02f53c name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:01:14 cert-expiration-974855 crio[2894]: time="2024-07-29 19:01:14.566647525Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=efe378d6-dfbb-4982-9ba4-49c20f02f53c name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:01:14 cert-expiration-974855 crio[2894]: time="2024-07-29 19:01:14.566834245Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:324fce8cf128205679b140bf081c8bec93b3647c3956bfa96ad9b84424333074,PodSandboxId:b8d40af6dadd8e5cb9a0ca4affbf3dc6140bc7e2a0d4c813b069b7bf94b960f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:15,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722279608543263167,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-cert-expiration-974855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ba837e13406d9d02f3d4da5e7097682,},Annotations:map[string]string{io.kubernetes.container.hash: 29b34c03,io.kubernetes.container.restartCount: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:549b70dfebf294b988e5b2b5c2a530a7ac6d338ea2129d36ca7fd93081293382,PodSandboxId:2c5a548467c7f7b92e51fa853c0e2af4efa1d57db95c7f1d2e14171ea712b685,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722279433171927826,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-cert-expiration-974855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3627cf8a29dbc4189f13ffd1c0fdacdd,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=efe378d6-dfbb-4982-9ba4-49c20f02f53c name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:01:14 cert-expiration-974855 crio[2894]: time="2024-07-29 19:01:14.601069009Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=48183272-253e-4e82-93ad-44e2b2ac0f63 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:01:14 cert-expiration-974855 crio[2894]: time="2024-07-29 19:01:14.601155307Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=48183272-253e-4e82-93ad-44e2b2ac0f63 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:01:14 cert-expiration-974855 crio[2894]: time="2024-07-29 19:01:14.602649446Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b8b5c64e-8eaf-4ba2-90ba-a58f6308de77 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:01:14 cert-expiration-974855 crio[2894]: time="2024-07-29 19:01:14.603117259Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722279674603093360,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b8b5c64e-8eaf-4ba2-90ba-a58f6308de77 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:01:14 cert-expiration-974855 crio[2894]: time="2024-07-29 19:01:14.603926975Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8279eb67-57b0-4e09-9ebb-2ae19b1bc0ce name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:01:14 cert-expiration-974855 crio[2894]: time="2024-07-29 19:01:14.603980205Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8279eb67-57b0-4e09-9ebb-2ae19b1bc0ce name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:01:14 cert-expiration-974855 crio[2894]: time="2024-07-29 19:01:14.604081420Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:324fce8cf128205679b140bf081c8bec93b3647c3956bfa96ad9b84424333074,PodSandboxId:b8d40af6dadd8e5cb9a0ca4affbf3dc6140bc7e2a0d4c813b069b7bf94b960f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:15,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722279608543263167,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-cert-expiration-974855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ba837e13406d9d02f3d4da5e7097682,},Annotations:map[string]string{io.kubernetes.container.hash: 29b34c03,io.kubernetes.container.restartCount: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:549b70dfebf294b988e5b2b5c2a530a7ac6d338ea2129d36ca7fd93081293382,PodSandboxId:2c5a548467c7f7b92e51fa853c0e2af4efa1d57db95c7f1d2e14171ea712b685,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722279433171927826,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-cert-expiration-974855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3627cf8a29dbc4189f13ffd1c0fdacdd,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8279eb67-57b0-4e09-9ebb-2ae19b1bc0ce name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:01:14 cert-expiration-974855 crio[2894]: time="2024-07-29 19:01:14.639501434Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2b750205-1136-4713-a711-09900d33c94c name=/runtime.v1.RuntimeService/Version
	Jul 29 19:01:14 cert-expiration-974855 crio[2894]: time="2024-07-29 19:01:14.639615464Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2b750205-1136-4713-a711-09900d33c94c name=/runtime.v1.RuntimeService/Version
	Jul 29 19:01:14 cert-expiration-974855 crio[2894]: time="2024-07-29 19:01:14.641122254Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=122e5082-5f31-45c4-8ecc-5aef9ac4e3e5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:01:14 cert-expiration-974855 crio[2894]: time="2024-07-29 19:01:14.641450881Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722279674641433735,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=122e5082-5f31-45c4-8ecc-5aef9ac4e3e5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:01:14 cert-expiration-974855 crio[2894]: time="2024-07-29 19:01:14.642029063Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=923e83b4-360d-41a7-815c-1cbf8da79be4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:01:14 cert-expiration-974855 crio[2894]: time="2024-07-29 19:01:14.642097846Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=923e83b4-360d-41a7-815c-1cbf8da79be4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:01:14 cert-expiration-974855 crio[2894]: time="2024-07-29 19:01:14.642174730Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:324fce8cf128205679b140bf081c8bec93b3647c3956bfa96ad9b84424333074,PodSandboxId:b8d40af6dadd8e5cb9a0ca4affbf3dc6140bc7e2a0d4c813b069b7bf94b960f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:15,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722279608543263167,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-cert-expiration-974855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ba837e13406d9d02f3d4da5e7097682,},Annotations:map[string]string{io.kubernetes.container.hash: 29b34c03,io.kubernetes.container.restartCount: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:549b70dfebf294b988e5b2b5c2a530a7ac6d338ea2129d36ca7fd93081293382,PodSandboxId:2c5a548467c7f7b92e51fa853c0e2af4efa1d57db95c7f1d2e14171ea712b685,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722279433171927826,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-cert-expiration-974855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3627cf8a29dbc4189f13ffd1c0fdacdd,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=923e83b4-360d-41a7-815c-1cbf8da79be4 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                ATTEMPT             POD ID              POD
	324fce8cf1282       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   About a minute ago   Exited              kube-apiserver      15                  b8d40af6dadd8       kube-apiserver-cert-expiration-974855
	549b70dfebf29       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   4 minutes ago        Running             kube-scheduler      4                   2c5a548467c7f       kube-scheduler-cert-expiration-974855
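
The status table above shows kube-apiserver already exited on attempt 15 while kube-scheduler keeps running. A short sketch for pulling the apiserver container's output, reusing the truncated ID from the table (crictl accepts unique ID prefixes, and the log form mirrors the crictl calls logged earlier in this run):

    # Last output of the exited kube-apiserver container
    sudo crictl logs --tail 400 324fce8cf1282
    # Exit code, finish time and restart count of the same container
    sudo crictl inspect 324fce8cf1282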
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
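
This is the same localhost:8443 connection refused seen throughout the run: nothing is serving the API. A hedged pre-check before retrying the kubectl call quoted above:

    # Is anything listening on the apiserver port?
    sudo ss -tlnp | grep ':8443'
    # Direct health probe; this fails while the kube-apiserver container is down
    curl -ks https://localhost:8443/healthz
    # The original command, worth retrying only once the port answers
    sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig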
	
	
	==> dmesg <==
	[  +0.182814] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.172268] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.295735] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +4.474697] systemd-fstab-generator[760]: Ignoring "noauto" option for root device
	[  +0.070407] kauditd_printk_skb: 130 callbacks suppressed
	[  +5.642864] systemd-fstab-generator[953]: Ignoring "noauto" option for root device
	[  +0.066782] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.304100] systemd-fstab-generator[1288]: Ignoring "noauto" option for root device
	[  +0.083786] kauditd_printk_skb: 69 callbacks suppressed
	[  +1.331276] systemd-fstab-generator[1379]: Ignoring "noauto" option for root device
	[Jul29 18:44] kauditd_printk_skb: 49 callbacks suppressed
	[ +31.925103] kauditd_printk_skb: 59 callbacks suppressed
	[Jul29 18:47] systemd-fstab-generator[2633]: Ignoring "noauto" option for root device
	[  +0.258454] systemd-fstab-generator[2688]: Ignoring "noauto" option for root device
	[  +0.320920] systemd-fstab-generator[2725]: Ignoring "noauto" option for root device
	[  +0.283072] systemd-fstab-generator[2737]: Ignoring "noauto" option for root device
	[  +0.410513] systemd-fstab-generator[2765]: Ignoring "noauto" option for root device
	[Jul29 18:48] systemd-fstab-generator[3006]: Ignoring "noauto" option for root device
	[  +0.103901] kauditd_printk_skb: 182 callbacks suppressed
	[  +3.066805] systemd-fstab-generator[3129]: Ignoring "noauto" option for root device
	[Jul29 18:49] kauditd_printk_skb: 70 callbacks suppressed
	[Jul29 18:53] systemd-fstab-generator[9260]: Ignoring "noauto" option for root device
	[ +22.596056] kauditd_printk_skb: 68 callbacks suppressed
	[Jul29 18:57] systemd-fstab-generator[10851]: Ignoring "noauto" option for root device
	[ +22.490061] kauditd_printk_skb: 48 callbacks suppressed
	
	
	==> kernel <==
	 19:01:14 up 17 min,  0 users,  load average: 0.00, 0.10, 0.12
	Linux cert-expiration-974855 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [324fce8cf128205679b140bf081c8bec93b3647c3956bfa96ad9b84424333074] <==
	I0729 19:00:08.719285       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W0729 19:00:09.168687       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:00:09.169361       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0729 19:00:09.169458       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0729 19:00:09.179952       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 19:00:09.184777       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0729 19:00:09.184809       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0729 19:00:09.184958       1 instance.go:299] Using reconciler: lease
	W0729 19:00:09.185602       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:00:10.169422       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:00:10.170025       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:00:10.186854       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:00:11.765310       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:00:11.883011       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:00:12.020134       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:00:14.213502       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:00:14.498473       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:00:14.799611       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:00:18.530092       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:00:19.076028       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:00:19.181081       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:00:24.547632       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:00:25.628383       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:00:25.815089       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F0729 19:00:29.185458       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
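
The apiserver above exits fatally because it never reaches etcd on 127.0.0.1:2379, which matches the earlier crictl query in this log that found no etcd container at all. A minimal etcd-side check, assuming the kubeadm-style etcd manifest shown in the kubelet section below (its flags expose a plain-HTTP health endpoint on 127.0.0.1:2381):

    # Does an etcd container exist under CRI-O?
    sudo crictl ps -a --name etcd
    # If one is running, the metrics listener answers without client certificates
    curl -s http://127.0.0.1:2381/health
    # Otherwise, why can't the kubelet start it?
    sudo journalctl -u kubelet --no-pager | grep -i etcd | tail -n 40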
	
	
	==> kube-scheduler [549b70dfebf294b988e5b2b5c2a530a7ac6d338ea2129d36ca7fd93081293382] <==
	E0729 19:00:41.012904       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.50.104:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.50.104:8443: connect: connection refused
	W0729 19:00:45.794135       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.50.104:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.50.104:8443: connect: connection refused
	E0729 19:00:45.794187       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.50.104:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.50.104:8443: connect: connection refused
	W0729 19:00:48.446684       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.50.104:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.50.104:8443: connect: connection refused
	E0729 19:00:48.446805       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.50.104:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.50.104:8443: connect: connection refused
	W0729 19:00:50.398402       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.50.104:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.50.104:8443: connect: connection refused
	E0729 19:00:50.398477       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.50.104:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.50.104:8443: connect: connection refused
	W0729 19:00:51.937050       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.50.104:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.50.104:8443: connect: connection refused
	E0729 19:00:51.937093       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.50.104:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.50.104:8443: connect: connection refused
	W0729 19:00:54.554423       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.50.104:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.50.104:8443: connect: connection refused
	E0729 19:00:54.554492       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.50.104:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.50.104:8443: connect: connection refused
	W0729 19:00:57.819597       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.50.104:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.50.104:8443: connect: connection refused
	E0729 19:00:57.819682       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.50.104:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.50.104:8443: connect: connection refused
	W0729 19:00:58.663422       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.50.104:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.50.104:8443: connect: connection refused
	E0729 19:00:58.663462       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.50.104:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.50.104:8443: connect: connection refused
	W0729 19:01:00.540426       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.50.104:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.50.104:8443: connect: connection refused
	E0729 19:01:00.540515       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.50.104:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.50.104:8443: connect: connection refused
	W0729 19:01:06.058057       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.50.104:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.50.104:8443: connect: connection refused
	E0729 19:01:06.058129       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.50.104:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.50.104:8443: connect: connection refused
	W0729 19:01:08.237453       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.50.104:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.50.104:8443: connect: connection refused
	E0729 19:01:08.237505       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.50.104:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.50.104:8443: connect: connection refused
	W0729 19:01:08.803411       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.50.104:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.50.104:8443: connect: connection refused
	E0729 19:01:08.803450       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.50.104:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.50.104:8443: connect: connection refused
	W0729 19:01:13.748992       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.50.104:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.50.104:8443: connect: connection refused
	E0729 19:01:13.749066       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.50.104:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.50.104:8443: connect: connection refused
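
The scheduler itself stays up; every error above is a refused connection to https://192.168.50.104:8443, so the gap is the apiserver/etcd static pods rather than the scheduler. A short sketch, under the same assumptions as above, to confirm the kubelet is tracking the manifests kubeadm reported writing:

    # Static pod manifests kubeadm wrote
    ls -l /etc/kubernetes/manifests/
    # Pod sandboxes the kubelet has asked CRI-O to create for them
    sudo crictl pods
    # Kubelet's view of the apiserver/etcd containers
    sudo journalctl -u kubelet --no-pager | grep -Ei 'apiserver|etcd' | tail -n 60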
	
	
	==> kubelet <==
	Jul 29 19:01:00 cert-expiration-974855 kubelet[10858]: E0729 19:01:00.067637   10858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/cert-expiration-974855?timeout=10s\": dial tcp 192.168.50.104:8443: connect: connection refused" interval="7s"
	Jul 29 19:01:02 cert-expiration-974855 kubelet[10858]: I0729 19:01:02.531581   10858 scope.go:117] "RemoveContainer" containerID="324fce8cf128205679b140bf081c8bec93b3647c3956bfa96ad9b84424333074"
	Jul 29 19:01:02 cert-expiration-974855 kubelet[10858]: E0729 19:01:02.532410   10858 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-cert-expiration-974855_kube-system(2ba837e13406d9d02f3d4da5e7097682)\"" pod="kube-system/kube-apiserver-cert-expiration-974855" podUID="2ba837e13406d9d02f3d4da5e7097682"
	Jul 29 19:01:02 cert-expiration-974855 kubelet[10858]: E0729 19:01:02.581237   10858 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"cert-expiration-974855\" not found"
	Jul 29 19:01:05 cert-expiration-974855 kubelet[10858]: E0729 19:01:05.538062   10858 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_etcd_etcd-cert-expiration-974855_kube-system_1a1d91902dbb0c7a61002c438d8da2af_1\" is already in use by ad5a1ff573525a3f35cd98b4b5bb0dea5345e5ecbf8f3369cd7234551b74472e. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="d0ea8f9bcde1a96bb476cbcd50c5c5880f21873cb71ac48b6a563dba3daa0bc1"
	Jul 29 19:01:05 cert-expiration-974855 kubelet[10858]: E0729 19:01:05.538217   10858 kuberuntime_manager.go:1256] container &Container{Name:etcd,Image:registry.k8s.io/etcd:3.5.12-0,Command:[etcd --advertise-client-urls=https://192.168.50.104:2379 --cert-file=/var/lib/minikube/certs/etcd/server.crt --client-cert-auth=true --data-dir=/var/lib/minikube/etcd --experimental-initial-corrupt-check=true --experimental-watch-progress-notify-interval=5s --initial-advertise-peer-urls=https://192.168.50.104:2380 --initial-cluster=cert-expiration-974855=https://192.168.50.104:2380 --key-file=/var/lib/minikube/certs/etcd/server.key --listen-client-urls=https://127.0.0.1:2379,https://192.168.50.104:2379 --listen-metrics-urls=http://127.0.0.1:2381 --listen-peer-urls=https://192.168.50.104:2380 --name=cert-expiration-974855 --peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt --peer-client-cert-auth=true --peer-key-file=/var/lib/minikube/certs/etcd/peer.key --peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt --proxy-refresh-interval=70000 --snapshot-count=10000 --trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{104857600 0} {<nil>} 100Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etcd-data,ReadOnly:false,MountPath:/var/lib/minikube/etcd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etcd-certs,ReadOnly:false,MountPath:/var/lib/minikube/certs/etcd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health?exclude=NOSPACE&serializable=true,Port:{0 2381 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health?serializable=false,Port:{0 2381 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:24,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod etcd-cert-expiration-974855_kube-system(1a1d91902dbb0c7a61002c438d8da2af): CreateContainerError: the container name "k8s_etcd_etcd-cert-expiration-974855_kube-system_1a1d91902dbb0c7a61002c438d8da2af_1" is already in use by ad5a1ff573525a3f35cd98b4b5bb0dea5345e5ecbf8f3369cd7234551b74472e. You have to remove that container to be able to reuse that name: that name is already in use
	Jul 29 19:01:05 cert-expiration-974855 kubelet[10858]: E0729 19:01:05.538263   10858 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"the container name \\\"k8s_etcd_etcd-cert-expiration-974855_kube-system_1a1d91902dbb0c7a61002c438d8da2af_1\\\" is already in use by ad5a1ff573525a3f35cd98b4b5bb0dea5345e5ecbf8f3369cd7234551b74472e. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/etcd-cert-expiration-974855" podUID="1a1d91902dbb0c7a61002c438d8da2af"
	Jul 29 19:01:06 cert-expiration-974855 kubelet[10858]: I0729 19:01:06.099692   10858 kubelet_node_status.go:73] "Attempting to register node" node="cert-expiration-974855"
	Jul 29 19:01:06 cert-expiration-974855 kubelet[10858]: E0729 19:01:06.101166   10858 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.104:8443: connect: connection refused" node="cert-expiration-974855"
	Jul 29 19:01:07 cert-expiration-974855 kubelet[10858]: E0729 19:01:07.069096   10858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/cert-expiration-974855?timeout=10s\": dial tcp 192.168.50.104:8443: connect: connection refused" interval="7s"
	Jul 29 19:01:09 cert-expiration-974855 kubelet[10858]: E0729 19:01:09.291647   10858 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.50.104:8443: connect: connection refused" event="&Event{ObjectMeta:{cert-expiration-974855.17e6c404d4994ccc  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:cert-expiration-974855,UID:cert-expiration-974855,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node cert-expiration-974855 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:cert-expiration-974855,},FirstTimestamp:2024-07-29 18:57:12.541523148 +0000 UTC m=+0.289354433,LastTimestamp:2024-07-29 18:57:12.541523148 +0000 UTC m=+0.289354433,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:cert-expiration-974855,}"
	Jul 29 19:01:12 cert-expiration-974855 kubelet[10858]: E0729 19:01:12.551866   10858 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 19:01:12 cert-expiration-974855 kubelet[10858]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 19:01:12 cert-expiration-974855 kubelet[10858]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 19:01:12 cert-expiration-974855 kubelet[10858]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 19:01:12 cert-expiration-974855 kubelet[10858]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 19:01:12 cert-expiration-974855 kubelet[10858]: E0729 19:01:12.582196   10858 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"cert-expiration-974855\" not found"
	Jul 29 19:01:13 cert-expiration-974855 kubelet[10858]: I0729 19:01:13.102870   10858 kubelet_node_status.go:73] "Attempting to register node" node="cert-expiration-974855"
	Jul 29 19:01:13 cert-expiration-974855 kubelet[10858]: E0729 19:01:13.103648   10858 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.104:8443: connect: connection refused" node="cert-expiration-974855"
	Jul 29 19:01:13 cert-expiration-974855 kubelet[10858]: E0729 19:01:13.544935   10858 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_kube-controller-manager_kube-controller-manager-cert-expiration-974855_kube-system_579d84406aff3462c49f64fe6febb489_1\" is already in use by 4a77943aef68b297e8b6e7b0381dc99b87906922ed5056b5143578d536487711. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="d8ea3a2a55a2c52415fae88611180253911bbd3785bf2966c49b6d422e633502"
	Jul 29 19:01:13 cert-expiration-974855 kubelet[10858]: E0729 19:01:13.545172   10858 kuberuntime_manager.go:1256] container &Container{Name:kube-controller-manager,Image:registry.k8s.io/kube-controller-manager:v1.30.3,Command:[kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-cidr=10.244.0.0/16 --cluster-name=mk --cluster-signing-cert-file=/var/lib/minikube/certs/ca.crt --cluster-signing-key-file=/var/lib/minikube/certs/ca.key --controllers=*,bootstrapsigner,tokencleaner --kubeconfig=/etc/kubernetes/controller-manager.conf --leader-elect=false --requestheader-client-ca-file=/var/lib/minikube/certs/front-proxy-ca.crt --root-ca-file=/var/lib/minikube/certs/ca.crt --service-account-private-key-file=/var/lib/minikube/certs/sa.key --service-cluster-ip-range=10.96.0.0/12 --use-service-accoun
t-credentials=true],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{200 -3} {<nil>} 200m DecimalSI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/ssl/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:flexvolume-dir,ReadOnly:false,MountPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:k8s-certs,ReadOnly:true,MountPath:/var/lib/minikube/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kubeconfig,ReadOnly:true,MountPath:/etc/kubernetes/controller-manager.conf,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:usr-share-ca-certificates,ReadOnly:true,MountPath:/usr/share/ca-certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},Liv
enessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 10257 },Host:127.0.0.1,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 10257 },Host:127.0.0.1,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:24,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-controller-manager-cert-expiration-974855_kube
-system(579d84406aff3462c49f64fe6febb489): CreateContainerError: the container name "k8s_kube-controller-manager_kube-controller-manager-cert-expiration-974855_kube-system_579d84406aff3462c49f64fe6febb489_1" is already in use by 4a77943aef68b297e8b6e7b0381dc99b87906922ed5056b5143578d536487711. You have to remove that container to be able to reuse that name: that name is already in use
	Jul 29 19:01:13 cert-expiration-974855 kubelet[10858]: E0729 19:01:13.545223   10858 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"the container name \\\"k8s_kube-controller-manager_kube-controller-manager-cert-expiration-974855_kube-system_579d84406aff3462c49f64fe6febb489_1\\\" is already in use by 4a77943aef68b297e8b6e7b0381dc99b87906922ed5056b5143578d536487711. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/kube-controller-manager-cert-expiration-974855" podUID="579d84406aff3462c49f64fe6febb489"
	Jul 29 19:01:14 cert-expiration-974855 kubelet[10858]: E0729 19:01:14.071189   10858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/cert-expiration-974855?timeout=10s\": dial tcp 192.168.50.104:8443: connect: connection refused" interval="7s"
	Jul 29 19:01:14 cert-expiration-974855 kubelet[10858]: W0729 19:01:14.901104   10858 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.50.104:8443: connect: connection refused
	Jul 29 19:01:14 cert-expiration-974855 kubelet[10858]: E0729 19:01:14.901199   10858 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.50.104:8443: connect: connection refused
	

                                                
                                                
-- /stdout --
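The kubelet log above shows both the etcd and kube-controller-manager containers failing with CreateContainerError because an earlier, exited container still holds the k8s_<name>_... container name, while node registration keeps failing against the refused apiserver at 192.168.50.104:8443. A minimal triage sketch for the stale-name error, assuming shell access to the node via minikube ssh and CRI-O as the runtime (the container ID below is a placeholder, not taken from this run):

minikube -p cert-expiration-974855 ssh
sudo crictl ps -a --name etcd         # list etcd containers, including exited ones still holding the name
sudo crictl rm <exited-container-id>  # remove the exited container so the name can be reused

The separate ip6tables canary error usually just means the ip6table_nat module is not loaded in the guest kernel (sudo modprobe ip6table_nat would be one way to check, if the guest image ships the module) and is unrelated to the CreateContainerError.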
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p cert-expiration-974855 -n cert-expiration-974855
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p cert-expiration-974855 -n cert-expiration-974855: exit status 2 (228.431881ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "cert-expiration-974855" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-974855" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-974855
--- FAIL: TestCertExpiration (1103.22s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (141.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 node stop m02 -v=7 --alsologtostderr
E0729 17:53:59.866089   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/functional-810151/client.crt: no such file or directory
E0729 17:54:40.827037   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/functional-810151/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-794405 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.455003154s)

                                                
                                                
-- stdout --
	* Stopping node "ha-794405-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 17:53:43.189789  109688 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:53:43.190200  109688 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:53:43.190215  109688 out.go:304] Setting ErrFile to fd 2...
	I0729 17:53:43.190222  109688 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:53:43.190629  109688 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19339-88081/.minikube/bin
	I0729 17:53:43.191186  109688 mustload.go:65] Loading cluster: ha-794405
	I0729 17:53:43.191634  109688 config.go:182] Loaded profile config "ha-794405": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:53:43.191666  109688 stop.go:39] StopHost: ha-794405-m02
	I0729 17:53:43.192031  109688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:53:43.192074  109688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:53:43.208545  109688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42693
	I0729 17:53:43.209085  109688 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:53:43.209752  109688 main.go:141] libmachine: Using API Version  1
	I0729 17:53:43.209793  109688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:53:43.210187  109688 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:53:43.212701  109688 out.go:177] * Stopping node "ha-794405-m02"  ...
	I0729 17:53:43.214112  109688 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0729 17:53:43.214141  109688 main.go:141] libmachine: (ha-794405-m02) Calling .DriverName
	I0729 17:53:43.214378  109688 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0729 17:53:43.214401  109688 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHHostname
	I0729 17:53:43.217166  109688 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:53:43.217583  109688 main.go:141] libmachine: (ha-794405-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:4a:02", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:50:15 +0000 UTC Type:0 Mac:52:54:00:1a:4a:02 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-794405-m02 Clientid:01:52:54:00:1a:4a:02}
	I0729 17:53:43.217613  109688 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:53:43.217757  109688 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHPort
	I0729 17:53:43.217935  109688 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHKeyPath
	I0729 17:53:43.218100  109688 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHUsername
	I0729 17:53:43.218232  109688 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m02/id_rsa Username:docker}
	I0729 17:53:43.299787  109688 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0729 17:53:43.352750  109688 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0729 17:53:43.408642  109688 main.go:141] libmachine: Stopping "ha-794405-m02"...
	I0729 17:53:43.408672  109688 main.go:141] libmachine: (ha-794405-m02) Calling .GetState
	I0729 17:53:43.410261  109688 main.go:141] libmachine: (ha-794405-m02) Calling .Stop
	I0729 17:53:43.413690  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 0/120
	I0729 17:53:44.415100  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 1/120
	I0729 17:53:45.416495  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 2/120
	I0729 17:53:46.418130  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 3/120
	I0729 17:53:47.419633  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 4/120
	I0729 17:53:48.421599  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 5/120
	I0729 17:53:49.423359  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 6/120
	I0729 17:53:50.425237  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 7/120
	I0729 17:53:51.426516  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 8/120
	I0729 17:53:52.428039  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 9/120
	I0729 17:53:53.430157  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 10/120
	I0729 17:53:54.431742  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 11/120
	I0729 17:53:55.433173  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 12/120
	I0729 17:53:56.434557  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 13/120
	I0729 17:53:57.436320  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 14/120
	I0729 17:53:58.438043  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 15/120
	I0729 17:53:59.440288  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 16/120
	I0729 17:54:00.441605  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 17/120
	I0729 17:54:01.442979  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 18/120
	I0729 17:54:02.444540  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 19/120
	I0729 17:54:03.446680  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 20/120
	I0729 17:54:04.448009  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 21/120
	I0729 17:54:05.449271  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 22/120
	I0729 17:54:06.451211  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 23/120
	I0729 17:54:07.452411  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 24/120
	I0729 17:54:08.453785  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 25/120
	I0729 17:54:09.455196  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 26/120
	I0729 17:54:10.456489  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 27/120
	I0729 17:54:11.457737  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 28/120
	I0729 17:54:12.459316  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 29/120
	I0729 17:54:13.461316  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 30/120
	I0729 17:54:14.462538  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 31/120
	I0729 17:54:15.464153  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 32/120
	I0729 17:54:16.465552  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 33/120
	I0729 17:54:17.467259  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 34/120
	I0729 17:54:18.469146  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 35/120
	I0729 17:54:19.470557  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 36/120
	I0729 17:54:20.472127  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 37/120
	I0729 17:54:21.473429  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 38/120
	I0729 17:54:22.475482  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 39/120
	I0729 17:54:23.477193  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 40/120
	I0729 17:54:24.479476  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 41/120
	I0729 17:54:25.481334  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 42/120
	I0729 17:54:26.483581  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 43/120
	I0729 17:54:27.484907  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 44/120
	I0729 17:54:28.486931  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 45/120
	I0729 17:54:29.488199  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 46/120
	I0729 17:54:30.489918  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 47/120
	I0729 17:54:31.491265  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 48/120
	I0729 17:54:32.492472  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 49/120
	I0729 17:54:33.494070  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 50/120
	I0729 17:54:34.496141  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 51/120
	I0729 17:54:35.497607  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 52/120
	I0729 17:54:36.498998  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 53/120
	I0729 17:54:37.500458  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 54/120
	I0729 17:54:38.502230  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 55/120
	I0729 17:54:39.504026  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 56/120
	I0729 17:54:40.505447  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 57/120
	I0729 17:54:41.507500  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 58/120
	I0729 17:54:42.508913  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 59/120
	I0729 17:54:43.510967  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 60/120
	I0729 17:54:44.512188  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 61/120
	I0729 17:54:45.513547  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 62/120
	I0729 17:54:46.515450  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 63/120
	I0729 17:54:47.516734  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 64/120
	I0729 17:54:48.518949  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 65/120
	I0729 17:54:49.520417  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 66/120
	I0729 17:54:50.521782  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 67/120
	I0729 17:54:51.523269  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 68/120
	I0729 17:54:52.524540  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 69/120
	I0729 17:54:53.526668  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 70/120
	I0729 17:54:54.528223  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 71/120
	I0729 17:54:55.529608  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 72/120
	I0729 17:54:56.531276  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 73/120
	I0729 17:54:57.533200  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 74/120
	I0729 17:54:58.535060  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 75/120
	I0729 17:54:59.536526  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 76/120
	I0729 17:55:00.538366  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 77/120
	I0729 17:55:01.539830  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 78/120
	I0729 17:55:02.541288  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 79/120
	I0729 17:55:03.543198  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 80/120
	I0729 17:55:04.544496  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 81/120
	I0729 17:55:05.546818  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 82/120
	I0729 17:55:06.548181  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 83/120
	I0729 17:55:07.549778  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 84/120
	I0729 17:55:08.551708  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 85/120
	I0729 17:55:09.552987  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 86/120
	I0729 17:55:10.554272  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 87/120
	I0729 17:55:11.555565  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 88/120
	I0729 17:55:12.556963  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 89/120
	I0729 17:55:13.559080  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 90/120
	I0729 17:55:14.560483  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 91/120
	I0729 17:55:15.561936  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 92/120
	I0729 17:55:16.563255  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 93/120
	I0729 17:55:17.564634  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 94/120
	I0729 17:55:18.566550  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 95/120
	I0729 17:55:19.568146  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 96/120
	I0729 17:55:20.569734  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 97/120
	I0729 17:55:21.571058  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 98/120
	I0729 17:55:22.572313  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 99/120
	I0729 17:55:23.574070  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 100/120
	I0729 17:55:24.575464  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 101/120
	I0729 17:55:25.576904  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 102/120
	I0729 17:55:26.578065  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 103/120
	I0729 17:55:27.579433  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 104/120
	I0729 17:55:28.581401  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 105/120
	I0729 17:55:29.583261  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 106/120
	I0729 17:55:30.584538  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 107/120
	I0729 17:55:31.585874  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 108/120
	I0729 17:55:32.587142  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 109/120
	I0729 17:55:33.588522  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 110/120
	I0729 17:55:34.589983  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 111/120
	I0729 17:55:35.591156  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 112/120
	I0729 17:55:36.592669  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 113/120
	I0729 17:55:37.594265  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 114/120
	I0729 17:55:38.596116  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 115/120
	I0729 17:55:39.597526  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 116/120
	I0729 17:55:40.598787  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 117/120
	I0729 17:55:41.600697  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 118/120
	I0729 17:55:42.602499  109688 main.go:141] libmachine: (ha-794405-m02) Waiting for machine to stop 119/120
	I0729 17:55:43.603834  109688 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0729 17:55:43.603969  109688 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-794405 node stop m02 -v=7 --alsologtostderr": exit status 30
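The stderr above shows libmachine polling "(ha-794405-m02) Waiting for machine to stop n/120" once per second for the full two minutes before giving up with the domain still reported as "Running". Outside the test harness, one hedged way to confirm and clear a stuck guest under the kvm2 driver, assuming the libvirt domain is named after the machine (illustrative only, not what the test does):

sudo virsh list --all             # confirm the state libvirt reports for ha-794405-m02
sudo virsh destroy ha-794405-m02  # hard power-off, only after a graceful stop has already failed

virsh destroy is an immediate power-off rather than a shutdown request, so it should only follow a timed-out graceful stop such as the one logged here.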
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 status -v=7 --alsologtostderr
E0729 17:55:53.334456   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/client.crt: no such file or directory
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-794405 status -v=7 --alsologtostderr: exit status 3 (19.011336249s)

                                                
                                                
-- stdout --
	ha-794405
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-794405-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-794405-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-794405-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 17:55:43.652327  110104 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:55:43.652594  110104 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:55:43.652606  110104 out.go:304] Setting ErrFile to fd 2...
	I0729 17:55:43.652611  110104 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:55:43.652772  110104 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19339-88081/.minikube/bin
	I0729 17:55:43.652959  110104 out.go:298] Setting JSON to false
	I0729 17:55:43.652987  110104 mustload.go:65] Loading cluster: ha-794405
	I0729 17:55:43.653125  110104 notify.go:220] Checking for updates...
	I0729 17:55:43.653475  110104 config.go:182] Loaded profile config "ha-794405": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:55:43.653498  110104 status.go:255] checking status of ha-794405 ...
	I0729 17:55:43.654049  110104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:55:43.654112  110104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:55:43.676794  110104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41795
	I0729 17:55:43.677264  110104 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:55:43.677868  110104 main.go:141] libmachine: Using API Version  1
	I0729 17:55:43.677894  110104 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:55:43.678254  110104 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:55:43.678437  110104 main.go:141] libmachine: (ha-794405) Calling .GetState
	I0729 17:55:43.680044  110104 status.go:330] ha-794405 host status = "Running" (err=<nil>)
	I0729 17:55:43.680063  110104 host.go:66] Checking if "ha-794405" exists ...
	I0729 17:55:43.680356  110104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:55:43.680408  110104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:55:43.695728  110104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33771
	I0729 17:55:43.696061  110104 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:55:43.696565  110104 main.go:141] libmachine: Using API Version  1
	I0729 17:55:43.696608  110104 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:55:43.696940  110104 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:55:43.697146  110104 main.go:141] libmachine: (ha-794405) Calling .GetIP
	I0729 17:55:43.699881  110104 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:55:43.700367  110104 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:55:43.700396  110104 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:55:43.700576  110104 host.go:66] Checking if "ha-794405" exists ...
	I0729 17:55:43.700989  110104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:55:43.701042  110104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:55:43.716422  110104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39545
	I0729 17:55:43.716811  110104 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:55:43.717238  110104 main.go:141] libmachine: Using API Version  1
	I0729 17:55:43.717257  110104 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:55:43.717588  110104 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:55:43.717775  110104 main.go:141] libmachine: (ha-794405) Calling .DriverName
	I0729 17:55:43.717990  110104 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:55:43.718018  110104 main.go:141] libmachine: (ha-794405) Calling .GetSSHHostname
	I0729 17:55:43.720619  110104 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:55:43.721073  110104 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:55:43.721103  110104 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:55:43.721259  110104 main.go:141] libmachine: (ha-794405) Calling .GetSSHPort
	I0729 17:55:43.721406  110104 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:55:43.721512  110104 main.go:141] libmachine: (ha-794405) Calling .GetSSHUsername
	I0729 17:55:43.721671  110104 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405/id_rsa Username:docker}
	I0729 17:55:43.807459  110104 ssh_runner.go:195] Run: systemctl --version
	I0729 17:55:43.815427  110104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:55:43.833382  110104 kubeconfig.go:125] found "ha-794405" server: "https://192.168.39.254:8443"
	I0729 17:55:43.833413  110104 api_server.go:166] Checking apiserver status ...
	I0729 17:55:43.833463  110104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 17:55:43.850263  110104 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1217/cgroup
	W0729 17:55:43.860626  110104 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1217/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 17:55:43.860673  110104 ssh_runner.go:195] Run: ls
	I0729 17:55:43.865001  110104 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 17:55:43.869411  110104 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 17:55:43.869439  110104 status.go:422] ha-794405 apiserver status = Running (err=<nil>)
	I0729 17:55:43.869452  110104 status.go:257] ha-794405 status: &{Name:ha-794405 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 17:55:43.869485  110104 status.go:255] checking status of ha-794405-m02 ...
	I0729 17:55:43.869826  110104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:55:43.869872  110104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:55:43.886758  110104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39935
	I0729 17:55:43.887228  110104 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:55:43.887730  110104 main.go:141] libmachine: Using API Version  1
	I0729 17:55:43.887751  110104 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:55:43.888076  110104 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:55:43.888269  110104 main.go:141] libmachine: (ha-794405-m02) Calling .GetState
	I0729 17:55:43.890074  110104 status.go:330] ha-794405-m02 host status = "Running" (err=<nil>)
	I0729 17:55:43.890093  110104 host.go:66] Checking if "ha-794405-m02" exists ...
	I0729 17:55:43.890436  110104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:55:43.890490  110104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:55:43.905636  110104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40995
	I0729 17:55:43.906124  110104 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:55:43.906611  110104 main.go:141] libmachine: Using API Version  1
	I0729 17:55:43.906628  110104 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:55:43.906982  110104 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:55:43.907179  110104 main.go:141] libmachine: (ha-794405-m02) Calling .GetIP
	I0729 17:55:43.909722  110104 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:55:43.910226  110104 main.go:141] libmachine: (ha-794405-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:4a:02", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:50:15 +0000 UTC Type:0 Mac:52:54:00:1a:4a:02 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-794405-m02 Clientid:01:52:54:00:1a:4a:02}
	I0729 17:55:43.910244  110104 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:55:43.910407  110104 host.go:66] Checking if "ha-794405-m02" exists ...
	I0729 17:55:43.910715  110104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:55:43.910755  110104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:55:43.925118  110104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34645
	I0729 17:55:43.925494  110104 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:55:43.925946  110104 main.go:141] libmachine: Using API Version  1
	I0729 17:55:43.925966  110104 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:55:43.926295  110104 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:55:43.926492  110104 main.go:141] libmachine: (ha-794405-m02) Calling .DriverName
	I0729 17:55:43.926690  110104 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:55:43.926709  110104 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHHostname
	I0729 17:55:43.929452  110104 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:55:43.929891  110104 main.go:141] libmachine: (ha-794405-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:4a:02", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:50:15 +0000 UTC Type:0 Mac:52:54:00:1a:4a:02 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-794405-m02 Clientid:01:52:54:00:1a:4a:02}
	I0729 17:55:43.929917  110104 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:55:43.930033  110104 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHPort
	I0729 17:55:43.930195  110104 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHKeyPath
	I0729 17:55:43.930337  110104 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHUsername
	I0729 17:55:43.930476  110104 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m02/id_rsa Username:docker}
	W0729 17:56:02.249065  110104 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.62:22: connect: no route to host
	W0729 17:56:02.249186  110104 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.62:22: connect: no route to host
	E0729 17:56:02.249206  110104 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.62:22: connect: no route to host
	I0729 17:56:02.249214  110104 status.go:257] ha-794405-m02 status: &{Name:ha-794405-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 17:56:02.249234  110104 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.62:22: connect: no route to host
	I0729 17:56:02.249241  110104 status.go:255] checking status of ha-794405-m03 ...
	I0729 17:56:02.249739  110104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:02.249800  110104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:02.264800  110104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34289
	I0729 17:56:02.265239  110104 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:02.265770  110104 main.go:141] libmachine: Using API Version  1
	I0729 17:56:02.265791  110104 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:02.266117  110104 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:02.266326  110104 main.go:141] libmachine: (ha-794405-m03) Calling .GetState
	I0729 17:56:02.267892  110104 status.go:330] ha-794405-m03 host status = "Running" (err=<nil>)
	I0729 17:56:02.267915  110104 host.go:66] Checking if "ha-794405-m03" exists ...
	I0729 17:56:02.268203  110104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:02.268239  110104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:02.282673  110104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39465
	I0729 17:56:02.283078  110104 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:02.283489  110104 main.go:141] libmachine: Using API Version  1
	I0729 17:56:02.283507  110104 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:02.283895  110104 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:02.284091  110104 main.go:141] libmachine: (ha-794405-m03) Calling .GetIP
	I0729 17:56:02.286840  110104 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:56:02.287234  110104 main.go:141] libmachine: (ha-794405-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:a7:17", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:51:28 +0000 UTC Type:0 Mac:52:54:00:6d:a7:17 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-794405-m03 Clientid:01:52:54:00:6d:a7:17}
	I0729 17:56:02.287265  110104 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined IP address 192.168.39.185 and MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:56:02.287388  110104 host.go:66] Checking if "ha-794405-m03" exists ...
	I0729 17:56:02.287769  110104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:02.287813  110104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:02.302055  110104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33243
	I0729 17:56:02.302403  110104 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:02.302877  110104 main.go:141] libmachine: Using API Version  1
	I0729 17:56:02.302900  110104 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:02.303168  110104 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:02.303356  110104 main.go:141] libmachine: (ha-794405-m03) Calling .DriverName
	I0729 17:56:02.303539  110104 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:56:02.303559  110104 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHHostname
	I0729 17:56:02.306109  110104 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:56:02.306507  110104 main.go:141] libmachine: (ha-794405-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:a7:17", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:51:28 +0000 UTC Type:0 Mac:52:54:00:6d:a7:17 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-794405-m03 Clientid:01:52:54:00:6d:a7:17}
	I0729 17:56:02.306533  110104 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined IP address 192.168.39.185 and MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:56:02.306702  110104 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHPort
	I0729 17:56:02.306866  110104 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHKeyPath
	I0729 17:56:02.307022  110104 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHUsername
	I0729 17:56:02.307150  110104 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m03/id_rsa Username:docker}
	I0729 17:56:02.389984  110104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:56:02.409118  110104 kubeconfig.go:125] found "ha-794405" server: "https://192.168.39.254:8443"
	I0729 17:56:02.409157  110104 api_server.go:166] Checking apiserver status ...
	I0729 17:56:02.409201  110104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 17:56:02.427592  110104 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1518/cgroup
	W0729 17:56:02.441395  110104 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1518/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 17:56:02.441440  110104 ssh_runner.go:195] Run: ls
	I0729 17:56:02.445713  110104 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 17:56:02.451273  110104 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 17:56:02.451297  110104 status.go:422] ha-794405-m03 apiserver status = Running (err=<nil>)
	I0729 17:56:02.451308  110104 status.go:257] ha-794405-m03 status: &{Name:ha-794405-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 17:56:02.451331  110104 status.go:255] checking status of ha-794405-m04 ...
	I0729 17:56:02.451673  110104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:02.451718  110104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:02.467217  110104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40815
	I0729 17:56:02.467682  110104 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:02.468173  110104 main.go:141] libmachine: Using API Version  1
	I0729 17:56:02.468195  110104 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:02.468533  110104 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:02.468758  110104 main.go:141] libmachine: (ha-794405-m04) Calling .GetState
	I0729 17:56:02.470542  110104 status.go:330] ha-794405-m04 host status = "Running" (err=<nil>)
	I0729 17:56:02.470561  110104 host.go:66] Checking if "ha-794405-m04" exists ...
	I0729 17:56:02.470980  110104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:02.471025  110104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:02.486191  110104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43491
	I0729 17:56:02.486604  110104 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:02.487006  110104 main.go:141] libmachine: Using API Version  1
	I0729 17:56:02.487033  110104 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:02.487344  110104 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:02.487535  110104 main.go:141] libmachine: (ha-794405-m04) Calling .GetIP
	I0729 17:56:02.490387  110104 main.go:141] libmachine: (ha-794405-m04) DBG | domain ha-794405-m04 has defined MAC address 52:54:00:9f:d3:c3 in network mk-ha-794405
	I0729 17:56:02.490823  110104 main.go:141] libmachine: (ha-794405-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:d3:c3", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:52:49 +0000 UTC Type:0 Mac:52:54:00:9f:d3:c3 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-794405-m04 Clientid:01:52:54:00:9f:d3:c3}
	I0729 17:56:02.490858  110104 main.go:141] libmachine: (ha-794405-m04) DBG | domain ha-794405-m04 has defined IP address 192.168.39.179 and MAC address 52:54:00:9f:d3:c3 in network mk-ha-794405
	I0729 17:56:02.491012  110104 host.go:66] Checking if "ha-794405-m04" exists ...
	I0729 17:56:02.491454  110104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:02.491506  110104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:02.505850  110104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35431
	I0729 17:56:02.506268  110104 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:02.506709  110104 main.go:141] libmachine: Using API Version  1
	I0729 17:56:02.506728  110104 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:02.507041  110104 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:02.507225  110104 main.go:141] libmachine: (ha-794405-m04) Calling .DriverName
	I0729 17:56:02.507412  110104 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:56:02.507434  110104 main.go:141] libmachine: (ha-794405-m04) Calling .GetSSHHostname
	I0729 17:56:02.510356  110104 main.go:141] libmachine: (ha-794405-m04) DBG | domain ha-794405-m04 has defined MAC address 52:54:00:9f:d3:c3 in network mk-ha-794405
	I0729 17:56:02.510803  110104 main.go:141] libmachine: (ha-794405-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:d3:c3", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:52:49 +0000 UTC Type:0 Mac:52:54:00:9f:d3:c3 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-794405-m04 Clientid:01:52:54:00:9f:d3:c3}
	I0729 17:56:02.510840  110104 main.go:141] libmachine: (ha-794405-m04) DBG | domain ha-794405-m04 has defined IP address 192.168.39.179 and MAC address 52:54:00:9f:d3:c3 in network mk-ha-794405
	I0729 17:56:02.511025  110104 main.go:141] libmachine: (ha-794405-m04) Calling .GetSSHPort
	I0729 17:56:02.511179  110104 main.go:141] libmachine: (ha-794405-m04) Calling .GetSSHKeyPath
	I0729 17:56:02.511347  110104 main.go:141] libmachine: (ha-794405-m04) Calling .GetSSHUsername
	I0729 17:56:02.511484  110104 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m04/id_rsa Username:docker}
	I0729 17:56:02.597983  110104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:56:02.615023  110104 status.go:257] ha-794405-m04 status: &{Name:ha-794405-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-794405 status -v=7 --alsologtostderr" : exit status 3
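The status probe marks ha-794405-m02 as Host:Error because the SSH dial to 192.168.39.62:22 fails with "no route to host" after the failed stop. A quick reachability check from the CI host, assuming it can route to the libvirt network (illustrative only):

ping -c 3 192.168.39.62       # basic ICMP reachability of the guest
nc -vz -w 5 192.168.39.62 22  # check whether sshd is answering on port 22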
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-794405 -n ha-794405
E0729 17:56:02.747716   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/functional-810151/client.crt: no such file or directory
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-794405 logs -n 25: (1.419664892s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-794405 cp ha-794405-m03:/home/docker/cp-test.txt                              | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1800704997/001/cp-test_ha-794405-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-794405 ssh -n                                                                 | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | ha-794405-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-794405 cp ha-794405-m03:/home/docker/cp-test.txt                              | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | ha-794405:/home/docker/cp-test_ha-794405-m03_ha-794405.txt                       |           |         |         |                     |                     |
	| ssh     | ha-794405 ssh -n                                                                 | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | ha-794405-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-794405 ssh -n ha-794405 sudo cat                                              | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | /home/docker/cp-test_ha-794405-m03_ha-794405.txt                                 |           |         |         |                     |                     |
	| cp      | ha-794405 cp ha-794405-m03:/home/docker/cp-test.txt                              | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | ha-794405-m02:/home/docker/cp-test_ha-794405-m03_ha-794405-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-794405 ssh -n                                                                 | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | ha-794405-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-794405 ssh -n ha-794405-m02 sudo cat                                          | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | /home/docker/cp-test_ha-794405-m03_ha-794405-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-794405 cp ha-794405-m03:/home/docker/cp-test.txt                              | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | ha-794405-m04:/home/docker/cp-test_ha-794405-m03_ha-794405-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-794405 ssh -n                                                                 | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | ha-794405-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-794405 ssh -n ha-794405-m04 sudo cat                                          | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | /home/docker/cp-test_ha-794405-m03_ha-794405-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-794405 cp testdata/cp-test.txt                                                | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | ha-794405-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-794405 ssh -n                                                                 | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | ha-794405-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-794405 cp ha-794405-m04:/home/docker/cp-test.txt                              | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1800704997/001/cp-test_ha-794405-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-794405 ssh -n                                                                 | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | ha-794405-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-794405 cp ha-794405-m04:/home/docker/cp-test.txt                              | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | ha-794405:/home/docker/cp-test_ha-794405-m04_ha-794405.txt                       |           |         |         |                     |                     |
	| ssh     | ha-794405 ssh -n                                                                 | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | ha-794405-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-794405 ssh -n ha-794405 sudo cat                                              | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | /home/docker/cp-test_ha-794405-m04_ha-794405.txt                                 |           |         |         |                     |                     |
	| cp      | ha-794405 cp ha-794405-m04:/home/docker/cp-test.txt                              | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | ha-794405-m02:/home/docker/cp-test_ha-794405-m04_ha-794405-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-794405 ssh -n                                                                 | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | ha-794405-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-794405 ssh -n ha-794405-m02 sudo cat                                          | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | /home/docker/cp-test_ha-794405-m04_ha-794405-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-794405 cp ha-794405-m04:/home/docker/cp-test.txt                              | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | ha-794405-m03:/home/docker/cp-test_ha-794405-m04_ha-794405-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-794405 ssh -n                                                                 | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | ha-794405-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-794405 ssh -n ha-794405-m03 sudo cat                                          | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | /home/docker/cp-test_ha-794405-m04_ha-794405-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-794405 node stop m02 -v=7                                                     | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 17:49:02
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 17:49:02.826095  105708 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:49:02.826385  105708 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:49:02.826396  105708 out.go:304] Setting ErrFile to fd 2...
	I0729 17:49:02.826400  105708 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:49:02.826591  105708 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19339-88081/.minikube/bin
	I0729 17:49:02.827147  105708 out.go:298] Setting JSON to false
	I0729 17:49:02.828119  105708 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":9063,"bootTime":1722266280,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 17:49:02.828172  105708 start.go:139] virtualization: kvm guest
	I0729 17:49:02.830990  105708 out.go:177] * [ha-794405] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 17:49:02.832383  105708 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 17:49:02.832406  105708 notify.go:220] Checking for updates...
	I0729 17:49:02.834889  105708 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 17:49:02.836265  105708 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19339-88081/kubeconfig
	I0729 17:49:02.837498  105708 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19339-88081/.minikube
	I0729 17:49:02.838698  105708 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 17:49:02.839838  105708 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 17:49:02.841175  105708 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 17:49:02.876993  105708 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 17:49:02.878394  105708 start.go:297] selected driver: kvm2
	I0729 17:49:02.878409  105708 start.go:901] validating driver "kvm2" against <nil>
	I0729 17:49:02.878421  105708 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 17:49:02.879446  105708 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:49:02.879522  105708 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19339-88081/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 17:49:02.895099  105708 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 17:49:02.895149  105708 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 17:49:02.895354  105708 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 17:49:02.895408  105708 cni.go:84] Creating CNI manager for ""
	I0729 17:49:02.895419  105708 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0729 17:49:02.895426  105708 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0729 17:49:02.895481  105708 start.go:340] cluster config:
	{Name:ha-794405 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-794405 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:49:02.895575  105708 iso.go:125] acquiring lock: {Name:mkff602c753bfa3e0d79d57ee8dc490f2e8d0298 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:49:02.897380  105708 out.go:177] * Starting "ha-794405" primary control-plane node in "ha-794405" cluster
	I0729 17:49:02.898661  105708 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 17:49:02.898696  105708 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 17:49:02.898706  105708 cache.go:56] Caching tarball of preloaded images
	I0729 17:49:02.898779  105708 preload.go:172] Found /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 17:49:02.898788  105708 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 17:49:02.899135  105708 profile.go:143] Saving config to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/config.json ...
	I0729 17:49:02.899157  105708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/config.json: {Name:mk30de7d0c2625e6321a17969a3dfd0d2dbdef3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:49:02.899281  105708 start.go:360] acquireMachinesLock for ha-794405: {Name:mkb4fc91615189a18c2505c715f6575ee0da2912 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:49:02.899307  105708 start.go:364] duration metric: took 14.682µs to acquireMachinesLock for "ha-794405"
	I0729 17:49:02.899323  105708 start.go:93] Provisioning new machine with config: &{Name:ha-794405 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-794405 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 17:49:02.899386  105708 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 17:49:02.901032  105708 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 17:49:02.901232  105708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:49:02.901277  105708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:49:02.915591  105708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46039
	I0729 17:49:02.916063  105708 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:49:02.916573  105708 main.go:141] libmachine: Using API Version  1
	I0729 17:49:02.916595  105708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:49:02.916895  105708 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:49:02.917094  105708 main.go:141] libmachine: (ha-794405) Calling .GetMachineName
	I0729 17:49:02.917236  105708 main.go:141] libmachine: (ha-794405) Calling .DriverName
	I0729 17:49:02.917370  105708 start.go:159] libmachine.API.Create for "ha-794405" (driver="kvm2")
	I0729 17:49:02.917400  105708 client.go:168] LocalClient.Create starting
	I0729 17:49:02.917445  105708 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem
	I0729 17:49:02.917484  105708 main.go:141] libmachine: Decoding PEM data...
	I0729 17:49:02.917508  105708 main.go:141] libmachine: Parsing certificate...
	I0729 17:49:02.917589  105708 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem
	I0729 17:49:02.917627  105708 main.go:141] libmachine: Decoding PEM data...
	I0729 17:49:02.917646  105708 main.go:141] libmachine: Parsing certificate...
	I0729 17:49:02.917668  105708 main.go:141] libmachine: Running pre-create checks...
	I0729 17:49:02.917687  105708 main.go:141] libmachine: (ha-794405) Calling .PreCreateCheck
	I0729 17:49:02.918008  105708 main.go:141] libmachine: (ha-794405) Calling .GetConfigRaw
	I0729 17:49:02.918405  105708 main.go:141] libmachine: Creating machine...
	I0729 17:49:02.918420  105708 main.go:141] libmachine: (ha-794405) Calling .Create
	I0729 17:49:02.918535  105708 main.go:141] libmachine: (ha-794405) Creating KVM machine...
	I0729 17:49:02.919868  105708 main.go:141] libmachine: (ha-794405) DBG | found existing default KVM network
	I0729 17:49:02.920566  105708 main.go:141] libmachine: (ha-794405) DBG | I0729 17:49:02.920405  105731 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1e0}
	I0729 17:49:02.920588  105708 main.go:141] libmachine: (ha-794405) DBG | created network xml: 
	I0729 17:49:02.920598  105708 main.go:141] libmachine: (ha-794405) DBG | <network>
	I0729 17:49:02.920608  105708 main.go:141] libmachine: (ha-794405) DBG |   <name>mk-ha-794405</name>
	I0729 17:49:02.920617  105708 main.go:141] libmachine: (ha-794405) DBG |   <dns enable='no'/>
	I0729 17:49:02.920627  105708 main.go:141] libmachine: (ha-794405) DBG |   
	I0729 17:49:02.920646  105708 main.go:141] libmachine: (ha-794405) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0729 17:49:02.920658  105708 main.go:141] libmachine: (ha-794405) DBG |     <dhcp>
	I0729 17:49:02.920668  105708 main.go:141] libmachine: (ha-794405) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0729 17:49:02.920680  105708 main.go:141] libmachine: (ha-794405) DBG |     </dhcp>
	I0729 17:49:02.920690  105708 main.go:141] libmachine: (ha-794405) DBG |   </ip>
	I0729 17:49:02.920701  105708 main.go:141] libmachine: (ha-794405) DBG |   
	I0729 17:49:02.920723  105708 main.go:141] libmachine: (ha-794405) DBG | </network>
	I0729 17:49:02.920736  105708 main.go:141] libmachine: (ha-794405) DBG | 
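	For readability, the private network definition that libmachine reports creating in the DBG lines above, reassembled as plain libvirt XML (content taken verbatim from the log, only the per-line log prefixes removed):

	    <network>
	      <name>mk-ha-794405</name>
	      <dns enable='no'/>
	      <ip address='192.168.39.1' netmask='255.255.255.0'>
	        <dhcp>
	          <range start='192.168.39.2' end='192.168.39.253'/>
	        </dhcp>
	      </ip>
	    </network>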
	I0729 17:49:02.925707  105708 main.go:141] libmachine: (ha-794405) DBG | trying to create private KVM network mk-ha-794405 192.168.39.0/24...
	I0729 17:49:02.992063  105708 main.go:141] libmachine: (ha-794405) DBG | private KVM network mk-ha-794405 192.168.39.0/24 created
	I0729 17:49:02.992097  105708 main.go:141] libmachine: (ha-794405) DBG | I0729 17:49:02.992044  105731 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19339-88081/.minikube
	I0729 17:49:02.992105  105708 main.go:141] libmachine: (ha-794405) Setting up store path in /home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405 ...
	I0729 17:49:02.992115  105708 main.go:141] libmachine: (ha-794405) Building disk image from file:///home/jenkins/minikube-integration/19339-88081/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0729 17:49:02.992160  105708 main.go:141] libmachine: (ha-794405) Downloading /home/jenkins/minikube-integration/19339-88081/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19339-88081/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0729 17:49:03.246791  105708 main.go:141] libmachine: (ha-794405) DBG | I0729 17:49:03.246674  105731 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405/id_rsa...
	I0729 17:49:03.734433  105708 main.go:141] libmachine: (ha-794405) DBG | I0729 17:49:03.734328  105731 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405/ha-794405.rawdisk...
	I0729 17:49:03.734464  105708 main.go:141] libmachine: (ha-794405) DBG | Writing magic tar header
	I0729 17:49:03.734492  105708 main.go:141] libmachine: (ha-794405) DBG | Writing SSH key tar header
	I0729 17:49:03.734510  105708 main.go:141] libmachine: (ha-794405) DBG | I0729 17:49:03.734433  105731 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405 ...
	I0729 17:49:03.734542  105708 main.go:141] libmachine: (ha-794405) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405
	I0729 17:49:03.734564  105708 main.go:141] libmachine: (ha-794405) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19339-88081/.minikube/machines
	I0729 17:49:03.734577  105708 main.go:141] libmachine: (ha-794405) Setting executable bit set on /home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405 (perms=drwx------)
	I0729 17:49:03.734588  105708 main.go:141] libmachine: (ha-794405) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19339-88081/.minikube
	I0729 17:49:03.734599  105708 main.go:141] libmachine: (ha-794405) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19339-88081
	I0729 17:49:03.734605  105708 main.go:141] libmachine: (ha-794405) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 17:49:03.734616  105708 main.go:141] libmachine: (ha-794405) DBG | Checking permissions on dir: /home/jenkins
	I0729 17:49:03.734621  105708 main.go:141] libmachine: (ha-794405) DBG | Checking permissions on dir: /home
	I0729 17:49:03.734630  105708 main.go:141] libmachine: (ha-794405) DBG | Skipping /home - not owner
	I0729 17:49:03.734653  105708 main.go:141] libmachine: (ha-794405) Setting executable bit set on /home/jenkins/minikube-integration/19339-88081/.minikube/machines (perms=drwxr-xr-x)
	I0729 17:49:03.734689  105708 main.go:141] libmachine: (ha-794405) Setting executable bit set on /home/jenkins/minikube-integration/19339-88081/.minikube (perms=drwxr-xr-x)
	I0729 17:49:03.734702  105708 main.go:141] libmachine: (ha-794405) Setting executable bit set on /home/jenkins/minikube-integration/19339-88081 (perms=drwxrwxr-x)
	I0729 17:49:03.734708  105708 main.go:141] libmachine: (ha-794405) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 17:49:03.734716  105708 main.go:141] libmachine: (ha-794405) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 17:49:03.734724  105708 main.go:141] libmachine: (ha-794405) Creating domain...
	I0729 17:49:03.735727  105708 main.go:141] libmachine: (ha-794405) define libvirt domain using xml: 
	I0729 17:49:03.735748  105708 main.go:141] libmachine: (ha-794405) <domain type='kvm'>
	I0729 17:49:03.735756  105708 main.go:141] libmachine: (ha-794405)   <name>ha-794405</name>
	I0729 17:49:03.735767  105708 main.go:141] libmachine: (ha-794405)   <memory unit='MiB'>2200</memory>
	I0729 17:49:03.735775  105708 main.go:141] libmachine: (ha-794405)   <vcpu>2</vcpu>
	I0729 17:49:03.735786  105708 main.go:141] libmachine: (ha-794405)   <features>
	I0729 17:49:03.735794  105708 main.go:141] libmachine: (ha-794405)     <acpi/>
	I0729 17:49:03.735804  105708 main.go:141] libmachine: (ha-794405)     <apic/>
	I0729 17:49:03.735810  105708 main.go:141] libmachine: (ha-794405)     <pae/>
	I0729 17:49:03.735824  105708 main.go:141] libmachine: (ha-794405)     
	I0729 17:49:03.735831  105708 main.go:141] libmachine: (ha-794405)   </features>
	I0729 17:49:03.735836  105708 main.go:141] libmachine: (ha-794405)   <cpu mode='host-passthrough'>
	I0729 17:49:03.735871  105708 main.go:141] libmachine: (ha-794405)   
	I0729 17:49:03.735896  105708 main.go:141] libmachine: (ha-794405)   </cpu>
	I0729 17:49:03.735912  105708 main.go:141] libmachine: (ha-794405)   <os>
	I0729 17:49:03.735923  105708 main.go:141] libmachine: (ha-794405)     <type>hvm</type>
	I0729 17:49:03.735936  105708 main.go:141] libmachine: (ha-794405)     <boot dev='cdrom'/>
	I0729 17:49:03.735946  105708 main.go:141] libmachine: (ha-794405)     <boot dev='hd'/>
	I0729 17:49:03.735958  105708 main.go:141] libmachine: (ha-794405)     <bootmenu enable='no'/>
	I0729 17:49:03.735967  105708 main.go:141] libmachine: (ha-794405)   </os>
	I0729 17:49:03.735985  105708 main.go:141] libmachine: (ha-794405)   <devices>
	I0729 17:49:03.736005  105708 main.go:141] libmachine: (ha-794405)     <disk type='file' device='cdrom'>
	I0729 17:49:03.736030  105708 main.go:141] libmachine: (ha-794405)       <source file='/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405/boot2docker.iso'/>
	I0729 17:49:03.736050  105708 main.go:141] libmachine: (ha-794405)       <target dev='hdc' bus='scsi'/>
	I0729 17:49:03.736063  105708 main.go:141] libmachine: (ha-794405)       <readonly/>
	I0729 17:49:03.736073  105708 main.go:141] libmachine: (ha-794405)     </disk>
	I0729 17:49:03.736085  105708 main.go:141] libmachine: (ha-794405)     <disk type='file' device='disk'>
	I0729 17:49:03.736097  105708 main.go:141] libmachine: (ha-794405)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 17:49:03.736108  105708 main.go:141] libmachine: (ha-794405)       <source file='/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405/ha-794405.rawdisk'/>
	I0729 17:49:03.736117  105708 main.go:141] libmachine: (ha-794405)       <target dev='hda' bus='virtio'/>
	I0729 17:49:03.736133  105708 main.go:141] libmachine: (ha-794405)     </disk>
	I0729 17:49:03.736151  105708 main.go:141] libmachine: (ha-794405)     <interface type='network'>
	I0729 17:49:03.736164  105708 main.go:141] libmachine: (ha-794405)       <source network='mk-ha-794405'/>
	I0729 17:49:03.736175  105708 main.go:141] libmachine: (ha-794405)       <model type='virtio'/>
	I0729 17:49:03.736186  105708 main.go:141] libmachine: (ha-794405)     </interface>
	I0729 17:49:03.736196  105708 main.go:141] libmachine: (ha-794405)     <interface type='network'>
	I0729 17:49:03.736207  105708 main.go:141] libmachine: (ha-794405)       <source network='default'/>
	I0729 17:49:03.736221  105708 main.go:141] libmachine: (ha-794405)       <model type='virtio'/>
	I0729 17:49:03.736231  105708 main.go:141] libmachine: (ha-794405)     </interface>
	I0729 17:49:03.736241  105708 main.go:141] libmachine: (ha-794405)     <serial type='pty'>
	I0729 17:49:03.736252  105708 main.go:141] libmachine: (ha-794405)       <target port='0'/>
	I0729 17:49:03.736271  105708 main.go:141] libmachine: (ha-794405)     </serial>
	I0729 17:49:03.736284  105708 main.go:141] libmachine: (ha-794405)     <console type='pty'>
	I0729 17:49:03.736298  105708 main.go:141] libmachine: (ha-794405)       <target type='serial' port='0'/>
	I0729 17:49:03.736310  105708 main.go:141] libmachine: (ha-794405)     </console>
	I0729 17:49:03.736321  105708 main.go:141] libmachine: (ha-794405)     <rng model='virtio'>
	I0729 17:49:03.736334  105708 main.go:141] libmachine: (ha-794405)       <backend model='random'>/dev/random</backend>
	I0729 17:49:03.736343  105708 main.go:141] libmachine: (ha-794405)     </rng>
	I0729 17:49:03.736352  105708 main.go:141] libmachine: (ha-794405)     
	I0729 17:49:03.736358  105708 main.go:141] libmachine: (ha-794405)     
	I0729 17:49:03.736373  105708 main.go:141] libmachine: (ha-794405)   </devices>
	I0729 17:49:03.736389  105708 main.go:141] libmachine: (ha-794405) </domain>
	I0729 17:49:03.736406  105708 main.go:141] libmachine: (ha-794405) 
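	The domain definition logged element-by-element above, reassembled as plain libvirt XML for reference (every element is copied from the log; the empty placeholder lines the log prints inside <cpu> and <devices> are omitted):

	    <domain type='kvm'>
	      <name>ha-794405</name>
	      <memory unit='MiB'>2200</memory>
	      <vcpu>2</vcpu>
	      <features>
	        <acpi/>
	        <apic/>
	        <pae/>
	      </features>
	      <cpu mode='host-passthrough'>
	      </cpu>
	      <os>
	        <type>hvm</type>
	        <boot dev='cdrom'/>
	        <boot dev='hd'/>
	        <bootmenu enable='no'/>
	      </os>
	      <devices>
	        <disk type='file' device='cdrom'>
	          <source file='/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405/boot2docker.iso'/>
	          <target dev='hdc' bus='scsi'/>
	          <readonly/>
	        </disk>
	        <disk type='file' device='disk'>
	          <driver name='qemu' type='raw' cache='default' io='threads' />
	          <source file='/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405/ha-794405.rawdisk'/>
	          <target dev='hda' bus='virtio'/>
	        </disk>
	        <interface type='network'>
	          <source network='mk-ha-794405'/>
	          <model type='virtio'/>
	        </interface>
	        <interface type='network'>
	          <source network='default'/>
	          <model type='virtio'/>
	        </interface>
	        <serial type='pty'>
	          <target port='0'/>
	        </serial>
	        <console type='pty'>
	          <target type='serial' port='0'/>
	        </console>
	        <rng model='virtio'>
	          <backend model='random'>/dev/random</backend>
	        </rng>
	      </devices>
	    </domain>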
	I0729 17:49:03.740482  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:99:46:a4 in network default
	I0729 17:49:03.741062  105708 main.go:141] libmachine: (ha-794405) Ensuring networks are active...
	I0729 17:49:03.741080  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:03.741701  105708 main.go:141] libmachine: (ha-794405) Ensuring network default is active
	I0729 17:49:03.741942  105708 main.go:141] libmachine: (ha-794405) Ensuring network mk-ha-794405 is active
	I0729 17:49:03.742356  105708 main.go:141] libmachine: (ha-794405) Getting domain xml...
	I0729 17:49:03.743032  105708 main.go:141] libmachine: (ha-794405) Creating domain...
	I0729 17:49:04.055804  105708 main.go:141] libmachine: (ha-794405) Waiting to get IP...
	I0729 17:49:04.056778  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:04.057192  105708 main.go:141] libmachine: (ha-794405) DBG | unable to find current IP address of domain ha-794405 in network mk-ha-794405
	I0729 17:49:04.057231  105708 main.go:141] libmachine: (ha-794405) DBG | I0729 17:49:04.057178  105731 retry.go:31] will retry after 205.96088ms: waiting for machine to come up
	I0729 17:49:04.264556  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:04.264963  105708 main.go:141] libmachine: (ha-794405) DBG | unable to find current IP address of domain ha-794405 in network mk-ha-794405
	I0729 17:49:04.264989  105708 main.go:141] libmachine: (ha-794405) DBG | I0729 17:49:04.264940  105731 retry.go:31] will retry after 324.704809ms: waiting for machine to come up
	I0729 17:49:04.591370  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:04.591845  105708 main.go:141] libmachine: (ha-794405) DBG | unable to find current IP address of domain ha-794405 in network mk-ha-794405
	I0729 17:49:04.591872  105708 main.go:141] libmachine: (ha-794405) DBG | I0729 17:49:04.591792  105731 retry.go:31] will retry after 405.573536ms: waiting for machine to come up
	I0729 17:49:04.999287  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:04.999748  105708 main.go:141] libmachine: (ha-794405) DBG | unable to find current IP address of domain ha-794405 in network mk-ha-794405
	I0729 17:49:04.999774  105708 main.go:141] libmachine: (ha-794405) DBG | I0729 17:49:04.999721  105731 retry.go:31] will retry after 496.871109ms: waiting for machine to come up
	I0729 17:49:05.498405  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:05.498773  105708 main.go:141] libmachine: (ha-794405) DBG | unable to find current IP address of domain ha-794405 in network mk-ha-794405
	I0729 17:49:05.498810  105708 main.go:141] libmachine: (ha-794405) DBG | I0729 17:49:05.498722  105731 retry.go:31] will retry after 510.903666ms: waiting for machine to come up
	I0729 17:49:06.011952  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:06.012359  105708 main.go:141] libmachine: (ha-794405) DBG | unable to find current IP address of domain ha-794405 in network mk-ha-794405
	I0729 17:49:06.012382  105708 main.go:141] libmachine: (ha-794405) DBG | I0729 17:49:06.012319  105731 retry.go:31] will retry after 664.645855ms: waiting for machine to come up
	I0729 17:49:06.678052  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:06.678400  105708 main.go:141] libmachine: (ha-794405) DBG | unable to find current IP address of domain ha-794405 in network mk-ha-794405
	I0729 17:49:06.678431  105708 main.go:141] libmachine: (ha-794405) DBG | I0729 17:49:06.678381  105731 retry.go:31] will retry after 1.124585448s: waiting for machine to come up
	I0729 17:49:07.804662  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:07.805145  105708 main.go:141] libmachine: (ha-794405) DBG | unable to find current IP address of domain ha-794405 in network mk-ha-794405
	I0729 17:49:07.805191  105708 main.go:141] libmachine: (ha-794405) DBG | I0729 17:49:07.805120  105731 retry.go:31] will retry after 1.146972901s: waiting for machine to come up
	I0729 17:49:08.953966  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:08.954310  105708 main.go:141] libmachine: (ha-794405) DBG | unable to find current IP address of domain ha-794405 in network mk-ha-794405
	I0729 17:49:08.954343  105708 main.go:141] libmachine: (ha-794405) DBG | I0729 17:49:08.954253  105731 retry.go:31] will retry after 1.280729444s: waiting for machine to come up
	I0729 17:49:10.236121  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:10.236479  105708 main.go:141] libmachine: (ha-794405) DBG | unable to find current IP address of domain ha-794405 in network mk-ha-794405
	I0729 17:49:10.236519  105708 main.go:141] libmachine: (ha-794405) DBG | I0729 17:49:10.236446  105731 retry.go:31] will retry after 1.647758504s: waiting for machine to come up
	I0729 17:49:11.886214  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:11.886687  105708 main.go:141] libmachine: (ha-794405) DBG | unable to find current IP address of domain ha-794405 in network mk-ha-794405
	I0729 17:49:11.886718  105708 main.go:141] libmachine: (ha-794405) DBG | I0729 17:49:11.886628  105731 retry.go:31] will retry after 2.347847077s: waiting for machine to come up
	I0729 17:49:14.235798  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:14.236227  105708 main.go:141] libmachine: (ha-794405) DBG | unable to find current IP address of domain ha-794405 in network mk-ha-794405
	I0729 17:49:14.236269  105708 main.go:141] libmachine: (ha-794405) DBG | I0729 17:49:14.236189  105731 retry.go:31] will retry after 2.690373484s: waiting for machine to come up
	I0729 17:49:16.929828  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:16.930286  105708 main.go:141] libmachine: (ha-794405) DBG | unable to find current IP address of domain ha-794405 in network mk-ha-794405
	I0729 17:49:16.930313  105708 main.go:141] libmachine: (ha-794405) DBG | I0729 17:49:16.930236  105731 retry.go:31] will retry after 3.511637453s: waiting for machine to come up
	I0729 17:49:20.445378  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:20.445822  105708 main.go:141] libmachine: (ha-794405) DBG | unable to find current IP address of domain ha-794405 in network mk-ha-794405
	I0729 17:49:20.445846  105708 main.go:141] libmachine: (ha-794405) DBG | I0729 17:49:20.445780  105731 retry.go:31] will retry after 5.302806771s: waiting for machine to come up
	I0729 17:49:25.751979  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:25.752379  105708 main.go:141] libmachine: (ha-794405) Found IP for machine: 192.168.39.102
	I0729 17:49:25.752399  105708 main.go:141] libmachine: (ha-794405) Reserving static IP address...
	I0729 17:49:25.752413  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has current primary IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:25.752726  105708 main.go:141] libmachine: (ha-794405) DBG | unable to find host DHCP lease matching {name: "ha-794405", mac: "52:54:00:a5:77:cc", ip: "192.168.39.102"} in network mk-ha-794405
	I0729 17:49:25.824491  105708 main.go:141] libmachine: (ha-794405) Reserved static IP address: 192.168.39.102
	I0729 17:49:25.824523  105708 main.go:141] libmachine: (ha-794405) Waiting for SSH to be available...
	I0729 17:49:25.824534  105708 main.go:141] libmachine: (ha-794405) DBG | Getting to WaitForSSH function...
	I0729 17:49:25.827117  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:25.827416  105708 main.go:141] libmachine: (ha-794405) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405
	I0729 17:49:25.827444  105708 main.go:141] libmachine: (ha-794405) DBG | unable to find defined IP address of network mk-ha-794405 interface with MAC address 52:54:00:a5:77:cc
	I0729 17:49:25.827525  105708 main.go:141] libmachine: (ha-794405) DBG | Using SSH client type: external
	I0729 17:49:25.827549  105708 main.go:141] libmachine: (ha-794405) DBG | Using SSH private key: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405/id_rsa (-rw-------)
	I0729 17:49:25.827610  105708 main.go:141] libmachine: (ha-794405) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 17:49:25.827638  105708 main.go:141] libmachine: (ha-794405) DBG | About to run SSH command:
	I0729 17:49:25.827655  105708 main.go:141] libmachine: (ha-794405) DBG | exit 0
	I0729 17:49:25.831028  105708 main.go:141] libmachine: (ha-794405) DBG | SSH cmd err, output: exit status 255: 
	I0729 17:49:25.831053  105708 main.go:141] libmachine: (ha-794405) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0729 17:49:25.831064  105708 main.go:141] libmachine: (ha-794405) DBG | command : exit 0
	I0729 17:49:25.831075  105708 main.go:141] libmachine: (ha-794405) DBG | err     : exit status 255
	I0729 17:49:25.831089  105708 main.go:141] libmachine: (ha-794405) DBG | output  : 
	I0729 17:49:28.831960  105708 main.go:141] libmachine: (ha-794405) DBG | Getting to WaitForSSH function...
	I0729 17:49:28.834145  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:28.834488  105708 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:49:28.834515  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:28.834643  105708 main.go:141] libmachine: (ha-794405) DBG | Using SSH client type: external
	I0729 17:49:28.834681  105708 main.go:141] libmachine: (ha-794405) DBG | Using SSH private key: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405/id_rsa (-rw-------)
	I0729 17:49:28.834717  105708 main.go:141] libmachine: (ha-794405) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.102 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 17:49:28.834733  105708 main.go:141] libmachine: (ha-794405) DBG | About to run SSH command:
	I0729 17:49:28.834747  105708 main.go:141] libmachine: (ha-794405) DBG | exit 0
	I0729 17:49:28.956655  105708 main.go:141] libmachine: (ha-794405) DBG | SSH cmd err, output: <nil>: 
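	The external SSH probe that succeeds here is, in effect, the following invocation, reassembled from the argument list logged at 17:49:28.834717; the options are grouped ahead of the destination for clarity, and 'exit 0' is the command minikube runs to confirm reachability:

	    /usr/bin/ssh -F /dev/null \
	      -o ConnectionAttempts=3 -o ConnectTimeout=10 \
	      -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
	      -o PasswordAuthentication=no -o ServerAliveInterval=60 \
	      -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	      -o IdentitiesOnly=yes \
	      -i /home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405/id_rsa \
	      -p 22 docker@192.168.39.102 'exit 0'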
	I0729 17:49:28.956903  105708 main.go:141] libmachine: (ha-794405) KVM machine creation complete!
	I0729 17:49:28.957190  105708 main.go:141] libmachine: (ha-794405) Calling .GetConfigRaw
	I0729 17:49:28.957795  105708 main.go:141] libmachine: (ha-794405) Calling .DriverName
	I0729 17:49:28.957991  105708 main.go:141] libmachine: (ha-794405) Calling .DriverName
	I0729 17:49:28.958136  105708 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 17:49:28.958148  105708 main.go:141] libmachine: (ha-794405) Calling .GetState
	I0729 17:49:28.959385  105708 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 17:49:28.959398  105708 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 17:49:28.959405  105708 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 17:49:28.959410  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHHostname
	I0729 17:49:28.961561  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:28.961911  105708 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:49:28.961932  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:28.962100  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHPort
	I0729 17:49:28.962260  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:49:28.962441  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:49:28.962594  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHUsername
	I0729 17:49:28.962762  105708 main.go:141] libmachine: Using SSH client type: native
	I0729 17:49:28.962948  105708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0729 17:49:28.962957  105708 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 17:49:29.063890  105708 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 17:49:29.063917  105708 main.go:141] libmachine: Detecting the provisioner...
	I0729 17:49:29.063927  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHHostname
	I0729 17:49:29.066506  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:29.066824  105708 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:49:29.066855  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:29.066976  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHPort
	I0729 17:49:29.067164  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:49:29.067335  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:49:29.067471  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHUsername
	I0729 17:49:29.067652  105708 main.go:141] libmachine: Using SSH client type: native
	I0729 17:49:29.067852  105708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0729 17:49:29.067867  105708 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 17:49:29.169299  105708 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 17:49:29.169418  105708 main.go:141] libmachine: found compatible host: buildroot
	I0729 17:49:29.169442  105708 main.go:141] libmachine: Provisioning with buildroot...
	I0729 17:49:29.169472  105708 main.go:141] libmachine: (ha-794405) Calling .GetMachineName
	I0729 17:49:29.169723  105708 buildroot.go:166] provisioning hostname "ha-794405"
	I0729 17:49:29.169753  105708 main.go:141] libmachine: (ha-794405) Calling .GetMachineName
	I0729 17:49:29.169967  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHHostname
	I0729 17:49:29.172330  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:29.172670  105708 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:49:29.172692  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:29.172838  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHPort
	I0729 17:49:29.173021  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:49:29.173179  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:49:29.173313  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHUsername
	I0729 17:49:29.173456  105708 main.go:141] libmachine: Using SSH client type: native
	I0729 17:49:29.173621  105708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0729 17:49:29.173634  105708 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-794405 && echo "ha-794405" | sudo tee /etc/hostname
	I0729 17:49:29.287535  105708 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-794405
	
	I0729 17:49:29.287562  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHHostname
	I0729 17:49:29.290362  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:29.290718  105708 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:49:29.290748  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:29.290888  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHPort
	I0729 17:49:29.291060  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:49:29.291260  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:49:29.291385  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHUsername
	I0729 17:49:29.291529  105708 main.go:141] libmachine: Using SSH client type: native
	I0729 17:49:29.291732  105708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0729 17:49:29.291756  105708 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-794405' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-794405/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-794405' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 17:49:29.401468  105708 main.go:141] libmachine: SSH cmd err, output: <nil>: 
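	The hostname registration snippet sent over SSH above, reassembled as a standalone shell script with comments (logic unchanged from the logged command):

	    #!/bin/sh
	    # Only touch /etc/hosts if it has no entry for the new hostname yet.
	    if ! grep -xq '.*\sha-794405' /etc/hosts; then
	      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	        # Rewrite an existing 127.0.1.1 entry to point at ha-794405.
	        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-794405/g' /etc/hosts
	      else
	        # Otherwise append a fresh 127.0.1.1 entry.
	        echo '127.0.1.1 ha-794405' | sudo tee -a /etc/hosts
	      fi
	    fi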
	I0729 17:49:29.401496  105708 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19339-88081/.minikube CaCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19339-88081/.minikube}
	I0729 17:49:29.401551  105708 buildroot.go:174] setting up certificates
	I0729 17:49:29.401563  105708 provision.go:84] configureAuth start
	I0729 17:49:29.401574  105708 main.go:141] libmachine: (ha-794405) Calling .GetMachineName
	I0729 17:49:29.401886  105708 main.go:141] libmachine: (ha-794405) Calling .GetIP
	I0729 17:49:29.404405  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:29.404737  105708 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:49:29.404759  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:29.404925  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHHostname
	I0729 17:49:29.407032  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:29.407332  105708 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:49:29.407354  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:29.407459  105708 provision.go:143] copyHostCerts
	I0729 17:49:29.407490  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem
	I0729 17:49:29.407538  105708 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem, removing ...
	I0729 17:49:29.407547  105708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem
	I0729 17:49:29.407623  105708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem (1078 bytes)
	I0729 17:49:29.407745  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem
	I0729 17:49:29.407776  105708 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem, removing ...
	I0729 17:49:29.407785  105708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem
	I0729 17:49:29.407821  105708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem (1123 bytes)
	I0729 17:49:29.407923  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem
	I0729 17:49:29.407949  105708 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem, removing ...
	I0729 17:49:29.407959  105708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem
	I0729 17:49:29.407994  105708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem (1679 bytes)
	I0729 17:49:29.408061  105708 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem org=jenkins.ha-794405 san=[127.0.0.1 192.168.39.102 ha-794405 localhost minikube]
	I0729 17:49:29.582277  105708 provision.go:177] copyRemoteCerts
	I0729 17:49:29.582350  105708 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 17:49:29.582379  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHHostname
	I0729 17:49:29.584803  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:29.585095  105708 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:49:29.585120  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:29.585246  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHPort
	I0729 17:49:29.585386  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:49:29.585595  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHUsername
	I0729 17:49:29.585742  105708 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405/id_rsa Username:docker}
	I0729 17:49:29.666289  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 17:49:29.666361  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 17:49:29.689260  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 17:49:29.689314  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0729 17:49:29.711383  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 17:49:29.711435  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 17:49:29.733565  105708 provision.go:87] duration metric: took 331.99164ms to configureAuth
	I0729 17:49:29.733587  105708 buildroot.go:189] setting minikube options for container-runtime
	I0729 17:49:29.733753  105708 config.go:182] Loaded profile config "ha-794405": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:49:29.733831  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHHostname
	I0729 17:49:29.736447  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:29.736759  105708 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:49:29.736789  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:29.736969  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHPort
	I0729 17:49:29.737139  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:49:29.737314  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:49:29.737459  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHUsername
	I0729 17:49:29.737632  105708 main.go:141] libmachine: Using SSH client type: native
	I0729 17:49:29.737790  105708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0729 17:49:29.737809  105708 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 17:49:30.018714  105708 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 17:49:30.018744  105708 main.go:141] libmachine: Checking connection to Docker...
	I0729 17:49:30.018754  105708 main.go:141] libmachine: (ha-794405) Calling .GetURL
	I0729 17:49:30.020110  105708 main.go:141] libmachine: (ha-794405) DBG | Using libvirt version 6000000
	I0729 17:49:30.022350  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:30.022691  105708 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:49:30.022708  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:30.022868  105708 main.go:141] libmachine: Docker is up and running!
	I0729 17:49:30.022891  105708 main.go:141] libmachine: Reticulating splines...
	I0729 17:49:30.022899  105708 client.go:171] duration metric: took 27.10548559s to LocalClient.Create
	I0729 17:49:30.022921  105708 start.go:167] duration metric: took 27.10555277s to libmachine.API.Create "ha-794405"
	I0729 17:49:30.022934  105708 start.go:293] postStartSetup for "ha-794405" (driver="kvm2")
	I0729 17:49:30.022954  105708 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 17:49:30.022976  105708 main.go:141] libmachine: (ha-794405) Calling .DriverName
	I0729 17:49:30.023222  105708 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 17:49:30.023253  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHHostname
	I0729 17:49:30.025417  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:30.025743  105708 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:49:30.025766  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:30.025928  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHPort
	I0729 17:49:30.026124  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:49:30.026283  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHUsername
	I0729 17:49:30.026433  105708 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405/id_rsa Username:docker}
	I0729 17:49:30.106738  105708 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 17:49:30.110753  105708 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 17:49:30.110784  105708 filesync.go:126] Scanning /home/jenkins/minikube-integration/19339-88081/.minikube/addons for local assets ...
	I0729 17:49:30.110834  105708 filesync.go:126] Scanning /home/jenkins/minikube-integration/19339-88081/.minikube/files for local assets ...
	I0729 17:49:30.110921  105708 filesync.go:149] local asset: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem -> 952822.pem in /etc/ssl/certs
	I0729 17:49:30.110935  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem -> /etc/ssl/certs/952822.pem
	I0729 17:49:30.111028  105708 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 17:49:30.119685  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem --> /etc/ssl/certs/952822.pem (1708 bytes)
	I0729 17:49:30.141793  105708 start.go:296] duration metric: took 118.84673ms for postStartSetup
	I0729 17:49:30.141841  105708 main.go:141] libmachine: (ha-794405) Calling .GetConfigRaw
	I0729 17:49:30.142370  105708 main.go:141] libmachine: (ha-794405) Calling .GetIP
	I0729 17:49:30.145013  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:30.145400  105708 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:49:30.145424  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:30.145667  105708 profile.go:143] Saving config to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/config.json ...
	I0729 17:49:30.145826  105708 start.go:128] duration metric: took 27.246430846s to createHost
	I0729 17:49:30.145848  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHHostname
	I0729 17:49:30.147850  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:30.148123  105708 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:49:30.148148  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:30.148271  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHPort
	I0729 17:49:30.148560  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:49:30.148723  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:49:30.148896  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHUsername
	I0729 17:49:30.149066  105708 main.go:141] libmachine: Using SSH client type: native
	I0729 17:49:30.149290  105708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0729 17:49:30.149302  105708 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 17:49:30.249133  105708 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722275370.230035605
	
	I0729 17:49:30.249158  105708 fix.go:216] guest clock: 1722275370.230035605
	I0729 17:49:30.249167  105708 fix.go:229] Guest: 2024-07-29 17:49:30.230035605 +0000 UTC Remote: 2024-07-29 17:49:30.145838608 +0000 UTC m=+27.355399708 (delta=84.196997ms)
	I0729 17:49:30.249187  105708 fix.go:200] guest clock delta is within tolerance: 84.196997ms
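
	The guest-clock check above simply compares the VM's reported time with the host-side timestamp and accepts the drift when it stays small. A minimal sketch of that comparison, using the two timestamps from the log; the 2s tolerance is an assumption for illustration, since the actual threshold is not shown in this log:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps taken from the fix.go lines above.
	guest := time.Date(2024, 7, 29, 17, 49, 30, 230035605, time.UTC)
	remote := time.Date(2024, 7, 29, 17, 49, 30, 145838608, time.UTC)

	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed threshold, not taken from the log
	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta <= tolerance)
}
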
	I0729 17:49:30.249192  105708 start.go:83] releasing machines lock for "ha-794405", held for 27.349876645s
	I0729 17:49:30.249218  105708 main.go:141] libmachine: (ha-794405) Calling .DriverName
	I0729 17:49:30.249490  105708 main.go:141] libmachine: (ha-794405) Calling .GetIP
	I0729 17:49:30.251823  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:30.252165  105708 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:49:30.252199  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:30.252383  105708 main.go:141] libmachine: (ha-794405) Calling .DriverName
	I0729 17:49:30.252876  105708 main.go:141] libmachine: (ha-794405) Calling .DriverName
	I0729 17:49:30.253056  105708 main.go:141] libmachine: (ha-794405) Calling .DriverName
	I0729 17:49:30.253278  105708 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 17:49:30.253347  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHHostname
	I0729 17:49:30.253283  105708 ssh_runner.go:195] Run: cat /version.json
	I0729 17:49:30.253410  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHHostname
	I0729 17:49:30.256015  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:30.256305  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:30.256459  105708 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:49:30.256482  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:30.256586  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHPort
	I0729 17:49:30.256608  105708 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:49:30.256626  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:30.256763  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:49:30.256788  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHPort
	I0729 17:49:30.256949  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:49:30.256960  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHUsername
	I0729 17:49:30.257138  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHUsername
	I0729 17:49:30.257128  105708 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405/id_rsa Username:docker}
	I0729 17:49:30.257295  105708 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405/id_rsa Username:docker}
	I0729 17:49:30.355760  105708 ssh_runner.go:195] Run: systemctl --version
	I0729 17:49:30.361465  105708 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 17:49:30.513451  105708 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 17:49:30.520499  105708 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 17:49:30.520676  105708 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 17:49:30.537130  105708 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 17:49:30.537153  105708 start.go:495] detecting cgroup driver to use...
	I0729 17:49:30.537214  105708 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 17:49:30.552663  105708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 17:49:30.566114  105708 docker.go:217] disabling cri-docker service (if available) ...
	I0729 17:49:30.566173  105708 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 17:49:30.579553  105708 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 17:49:30.593111  105708 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 17:49:30.699759  105708 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 17:49:30.852778  105708 docker.go:233] disabling docker service ...
	I0729 17:49:30.852877  105708 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 17:49:30.866825  105708 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 17:49:30.879979  105708 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 17:49:31.005064  105708 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 17:49:31.128047  105708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 17:49:31.141567  105708 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 17:49:31.159589  105708 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 17:49:31.159659  105708 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:49:31.169610  105708 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 17:49:31.169667  105708 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:49:31.179745  105708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:49:31.190025  105708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:49:31.200741  105708 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 17:49:31.211700  105708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:49:31.222206  105708 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:49:31.239291  105708 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
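
	The run of sed commands above edits /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.9, switch cgroup_manager to cgroupfs, replace conmon_cgroup with "pod", and ensure default_sysctls opens net.ipv4.ip_unprivileged_port_start=0. A rough Go sketch of the first three rewrites applied to an in-memory sample config (the sample is illustrative, not the VM's real 02-crio.conf):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `pause_image = "registry.k8s.io/pause:3.6"
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// Mirror the sed expressions from the log above.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// Drop any existing conmon_cgroup line, then re-add it after cgroup_manager.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
		ReplaceAllString(conf, "$0\nconmon_cgroup = \"pod\"")
	fmt.Print(conf)
}
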
	I0729 17:49:31.249523  105708 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 17:49:31.258669  105708 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 17:49:31.258723  105708 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 17:49:31.270748  105708 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 17:49:31.279746  105708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 17:49:31.398598  105708 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 17:49:31.541730  105708 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 17:49:31.541811  105708 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 17:49:31.547368  105708 start.go:563] Will wait 60s for crictl version
	I0729 17:49:31.547425  105708 ssh_runner.go:195] Run: which crictl
	I0729 17:49:31.551142  105708 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 17:49:31.591665  105708 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 17:49:31.591752  105708 ssh_runner.go:195] Run: crio --version
	I0729 17:49:31.618720  105708 ssh_runner.go:195] Run: crio --version
	I0729 17:49:31.651924  105708 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 17:49:31.653214  105708 main.go:141] libmachine: (ha-794405) Calling .GetIP
	I0729 17:49:31.655590  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:31.655858  105708 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:49:31.655888  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:31.656049  105708 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 17:49:31.660141  105708 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
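
	The bash one-liner above drops any stale host.minikube.internal line from /etc/hosts and re-appends the gateway mapping 192.168.39.1. A small sketch of the same filter-and-append idiom over an in-memory string; hostsUpdate is an illustrative name, not a minikube function:

package main

import (
	"fmt"
	"strings"
)

// hostsUpdate removes any existing entry for name and appends "ip\tname". Hypothetical helper.
func hostsUpdate(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.39.1\thost.minikube.internal\n"
	fmt.Print(hostsUpdate(hosts, "192.168.39.1", "host.minikube.internal"))
}
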
	I0729 17:49:31.673311  105708 kubeadm.go:883] updating cluster {Name:ha-794405 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-794405 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 17:49:31.673412  105708 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 17:49:31.673451  105708 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 17:49:31.705175  105708 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 17:49:31.705232  105708 ssh_runner.go:195] Run: which lz4
	I0729 17:49:31.708904  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0729 17:49:31.708980  105708 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 17:49:31.712793  105708 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 17:49:31.712821  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 17:49:33.062759  105708 crio.go:462] duration metric: took 1.353792868s to copy over tarball
	I0729 17:49:33.062838  105708 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 17:49:35.126738  105708 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.063865498s)
	I0729 17:49:35.126767  105708 crio.go:469] duration metric: took 2.06397882s to extract the tarball
	I0729 17:49:35.126776  105708 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 17:49:35.164202  105708 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 17:49:35.210338  105708 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 17:49:35.210361  105708 cache_images.go:84] Images are preloaded, skipping loading
	I0729 17:49:35.210369  105708 kubeadm.go:934] updating node { 192.168.39.102 8443 v1.30.3 crio true true} ...
	I0729 17:49:35.210476  105708 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-794405 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.102
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-794405 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 17:49:35.210543  105708 ssh_runner.go:195] Run: crio config
	I0729 17:49:35.260156  105708 cni.go:84] Creating CNI manager for ""
	I0729 17:49:35.260185  105708 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0729 17:49:35.260197  105708 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 17:49:35.260224  105708 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.102 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-794405 NodeName:ha-794405 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.102"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.102 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 17:49:35.260425  105708 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.102
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-794405"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.102
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.102"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
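
	The kubeadm config printed above is rendered from the options struct logged at kubeadm.go:181 (AdvertiseAddress, NodeName, NodeIP, KubernetesVersion, pod and service CIDRs). As a simplified illustration of that substitution, not minikube's real template, here is a text/template sketch for the InitConfiguration stanza only:

package main

import (
	"os"
	"text/template"
)

type initCfg struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	NodeIP           string
}

// Simplified stand-in for the template behind the dump above.
const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
`

func main() {
	c := initCfg{AdvertiseAddress: "192.168.39.102", BindPort: 8443, NodeName: "ha-794405", NodeIP: "192.168.39.102"}
	template.Must(template.New("init").Parse(tmpl)).Execute(os.Stdout, c)
}
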
	
	I0729 17:49:35.260458  105708 kube-vip.go:115] generating kube-vip config ...
	I0729 17:49:35.260531  105708 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 17:49:35.276684  105708 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 17:49:35.276790  105708 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
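
	The static-pod manifest above is what the kube-vip config step renders once control-plane load balancing is auto-enabled: the VIP 192.168.39.254 on eth0, port 8443, ARP mode, leader election, and lb_enable/lb_port. A rough sketch of emitting such an env list from a value table (values copied from the manifest above; the code is illustrative, not minikube's kube-vip.go):

package main

import "fmt"

func main() {
	// Selected env entries from the manifest above.
	env := []struct{ name, value string }{
		{"vip_arp", "true"},
		{"port", "8443"},
		{"vip_interface", "eth0"},
		{"cp_enable", "true"},
		{"address", "192.168.39.254"},
		{"lb_enable", "true"},
		{"lb_port", "8443"},
	}
	for _, e := range env {
		fmt.Printf("    - name: %s\n      value: %q\n", e.name, e.value)
	}
}
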
	I0729 17:49:35.276849  105708 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 17:49:35.286712  105708 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 17:49:35.286768  105708 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0729 17:49:35.295906  105708 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0729 17:49:35.311637  105708 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 17:49:35.327414  105708 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0729 17:49:35.343043  105708 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0729 17:49:35.359377  105708 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 17:49:35.363291  105708 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 17:49:35.375704  105708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 17:49:35.489508  105708 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 17:49:35.505445  105708 certs.go:68] Setting up /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405 for IP: 192.168.39.102
	I0729 17:49:35.505475  105708 certs.go:194] generating shared ca certs ...
	I0729 17:49:35.505496  105708 certs.go:226] acquiring lock for ca certs: {Name:mk20106d58b3f22ea0fc0f4b499fa3fa572a2690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:49:35.505692  105708 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key
	I0729 17:49:35.505757  105708 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key
	I0729 17:49:35.505772  105708 certs.go:256] generating profile certs ...
	I0729 17:49:35.505836  105708 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/client.key
	I0729 17:49:35.505853  105708 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/client.crt with IP's: []
	I0729 17:49:35.801022  105708 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/client.crt ...
	I0729 17:49:35.801067  105708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/client.crt: {Name:mkb8dd0c0c2d582f5ff5bb1fee374e0e6a310340 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:49:35.801267  105708 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/client.key ...
	I0729 17:49:35.801285  105708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/client.key: {Name:mkd4acd873e144301116c0340b52fa7490e94eb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:49:35.801393  105708 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.key.778a7140
	I0729 17:49:35.801412  105708 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.crt.778a7140 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.102 192.168.39.254]
	I0729 17:49:35.924277  105708 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.crt.778a7140 ...
	I0729 17:49:35.924310  105708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.crt.778a7140: {Name:mk0d46e1c11a2b050eaf1c974c78ccbcd4025fec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:49:35.924476  105708 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.key.778a7140 ...
	I0729 17:49:35.924493  105708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.key.778a7140: {Name:mk1d617f4ecae50f4a793285b8a14d10a8917d57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:49:35.924595  105708 certs.go:381] copying /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.crt.778a7140 -> /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.crt
	I0729 17:49:35.924715  105708 certs.go:385] copying /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.key.778a7140 -> /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.key
	I0729 17:49:35.924798  105708 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/proxy-client.key
	I0729 17:49:35.924820  105708 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/proxy-client.crt with IP's: []
	I0729 17:49:36.012100  105708 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/proxy-client.crt ...
	I0729 17:49:36.012133  105708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/proxy-client.crt: {Name:mk5dfd47a29e68c44b7150fb205a8b9651147a30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:49:36.012301  105708 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/proxy-client.key ...
	I0729 17:49:36.012317  105708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/proxy-client.key: {Name:mk9cf98a9eefaddd0bc8e7780f0dd63ef76e3e40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:49:36.012411  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 17:49:36.012434  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 17:49:36.012450  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 17:49:36.012466  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 17:49:36.012482  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 17:49:36.012501  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 17:49:36.012519  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 17:49:36.012544  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 17:49:36.012608  105708 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282.pem (1338 bytes)
	W0729 17:49:36.012657  105708 certs.go:480] ignoring /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282_empty.pem, impossibly tiny 0 bytes
	I0729 17:49:36.012671  105708 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 17:49:36.012707  105708 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem (1078 bytes)
	I0729 17:49:36.012740  105708 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem (1123 bytes)
	I0729 17:49:36.012774  105708 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem (1679 bytes)
	I0729 17:49:36.012832  105708 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem (1708 bytes)
	I0729 17:49:36.012914  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282.pem -> /usr/share/ca-certificates/95282.pem
	I0729 17:49:36.012941  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem -> /usr/share/ca-certificates/952822.pem
	I0729 17:49:36.012958  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:49:36.013570  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 17:49:36.039509  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 17:49:36.063983  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 17:49:36.088319  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 17:49:36.112761  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 17:49:36.136058  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 17:49:36.159574  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 17:49:36.182816  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 17:49:36.205661  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282.pem --> /usr/share/ca-certificates/95282.pem (1338 bytes)
	I0729 17:49:36.228619  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem --> /usr/share/ca-certificates/952822.pem (1708 bytes)
	I0729 17:49:36.251370  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 17:49:36.276742  105708 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 17:49:36.296764  105708 ssh_runner.go:195] Run: openssl version
	I0729 17:49:36.307903  105708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/952822.pem && ln -fs /usr/share/ca-certificates/952822.pem /etc/ssl/certs/952822.pem"
	I0729 17:49:36.324801  105708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/952822.pem
	I0729 17:49:36.330455  105708 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:45 /usr/share/ca-certificates/952822.pem
	I0729 17:49:36.330518  105708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/952822.pem
	I0729 17:49:36.336947  105708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/952822.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 17:49:36.347667  105708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 17:49:36.358240  105708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:49:36.362571  105708 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:49:36.362645  105708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:49:36.368228  105708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 17:49:36.380400  105708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/95282.pem && ln -fs /usr/share/ca-certificates/95282.pem /etc/ssl/certs/95282.pem"
	I0729 17:49:36.391176  105708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/95282.pem
	I0729 17:49:36.395579  105708 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:45 /usr/share/ca-certificates/95282.pem
	I0729 17:49:36.395643  105708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/95282.pem
	I0729 17:49:36.401174  105708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/95282.pem /etc/ssl/certs/51391683.0"
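
	Each of the three CA certificates above is installed the same way: copy the PEM into /usr/share/ca-certificates, derive its OpenSSL subject hash, and symlink /etc/ssl/certs/<hash>.0 to it so OpenSSL-based clients pick it up. A sketch of that pattern shelling out to the same openssl invocation; the paths are placeholders, not the exact files from this run:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem" // placeholder path

	// Same command the log runs to derive the /etc/ssl/certs link name.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")

	// Equivalent of "ln -fs": drop any stale link, then symlink to the PEM.
	_ = os.Remove(link)
	if err := os.Symlink(pem, link); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("linked", link, "->", pem)
}
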
	I0729 17:49:36.413631  105708 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 17:49:36.417843  105708 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 17:49:36.417890  105708 kubeadm.go:392] StartCluster: {Name:ha-794405 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-794405 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:49:36.417959  105708 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 17:49:36.417994  105708 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 17:49:36.458098  105708 cri.go:89] found id: ""
	I0729 17:49:36.458164  105708 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 17:49:36.468928  105708 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 17:49:36.479175  105708 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 17:49:36.490020  105708 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 17:49:36.490036  105708 kubeadm.go:157] found existing configuration files:
	
	I0729 17:49:36.490101  105708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 17:49:36.498544  105708 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 17:49:36.498591  105708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 17:49:36.507258  105708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 17:49:36.515477  105708 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 17:49:36.515537  105708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 17:49:36.524129  105708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 17:49:36.532366  105708 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 17:49:36.532404  105708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 17:49:36.540843  105708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 17:49:36.548912  105708 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 17:49:36.548950  105708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 17:49:36.557445  105708 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 17:49:36.784593  105708 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 17:49:47.860353  105708 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 17:49:47.860432  105708 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 17:49:47.860544  105708 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 17:49:47.860678  105708 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 17:49:47.860804  105708 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 17:49:47.860923  105708 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 17:49:47.862388  105708 out.go:204]   - Generating certificates and keys ...
	I0729 17:49:47.862465  105708 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 17:49:47.862522  105708 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 17:49:47.862596  105708 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0729 17:49:47.862648  105708 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0729 17:49:47.862719  105708 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0729 17:49:47.862814  105708 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0729 17:49:47.862884  105708 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0729 17:49:47.863008  105708 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-794405 localhost] and IPs [192.168.39.102 127.0.0.1 ::1]
	I0729 17:49:47.863066  105708 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0729 17:49:47.863176  105708 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-794405 localhost] and IPs [192.168.39.102 127.0.0.1 ::1]
	I0729 17:49:47.863233  105708 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0729 17:49:47.863291  105708 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0729 17:49:47.863329  105708 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0729 17:49:47.863425  105708 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 17:49:47.863496  105708 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 17:49:47.863544  105708 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 17:49:47.863600  105708 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 17:49:47.863681  105708 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 17:49:47.863757  105708 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 17:49:47.863885  105708 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 17:49:47.863946  105708 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 17:49:47.865259  105708 out.go:204]   - Booting up control plane ...
	I0729 17:49:47.865344  105708 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 17:49:47.865408  105708 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 17:49:47.865467  105708 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 17:49:47.865576  105708 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 17:49:47.865708  105708 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 17:49:47.865773  105708 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 17:49:47.865925  105708 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 17:49:47.866016  105708 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 17:49:47.866078  105708 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.688229ms
	I0729 17:49:47.866141  105708 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 17:49:47.866191  105708 kubeadm.go:310] [api-check] The API server is healthy after 5.882551655s
	I0729 17:49:47.866307  105708 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 17:49:47.866494  105708 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 17:49:47.866568  105708 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 17:49:47.866717  105708 kubeadm.go:310] [mark-control-plane] Marking the node ha-794405 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 17:49:47.866792  105708 kubeadm.go:310] [bootstrap-token] Using token: f793nk.j9zxoiw0utdua39g
	I0729 17:49:47.868137  105708 out.go:204]   - Configuring RBAC rules ...
	I0729 17:49:47.868241  105708 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 17:49:47.868310  105708 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 17:49:47.868428  105708 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 17:49:47.868535  105708 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0729 17:49:47.868674  105708 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 17:49:47.868804  105708 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 17:49:47.868972  105708 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 17:49:47.869022  105708 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 17:49:47.869061  105708 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 17:49:47.869067  105708 kubeadm.go:310] 
	I0729 17:49:47.869116  105708 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 17:49:47.869122  105708 kubeadm.go:310] 
	I0729 17:49:47.869199  105708 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 17:49:47.869213  105708 kubeadm.go:310] 
	I0729 17:49:47.869258  105708 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 17:49:47.869309  105708 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 17:49:47.869352  105708 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 17:49:47.869362  105708 kubeadm.go:310] 
	I0729 17:49:47.869405  105708 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 17:49:47.869411  105708 kubeadm.go:310] 
	I0729 17:49:47.869453  105708 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 17:49:47.869459  105708 kubeadm.go:310] 
	I0729 17:49:47.869505  105708 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 17:49:47.869569  105708 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 17:49:47.869626  105708 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 17:49:47.869632  105708 kubeadm.go:310] 
	I0729 17:49:47.869702  105708 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 17:49:47.869765  105708 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 17:49:47.869771  105708 kubeadm.go:310] 
	I0729 17:49:47.869865  105708 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token f793nk.j9zxoiw0utdua39g \
	I0729 17:49:47.869991  105708 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:13f0bc092dc09b7291dec830fc09c94db5ac1707dd26df6091df80009af3af7f \
	I0729 17:49:47.870023  105708 kubeadm.go:310] 	--control-plane 
	I0729 17:49:47.870035  105708 kubeadm.go:310] 
	I0729 17:49:47.870146  105708 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 17:49:47.870155  105708 kubeadm.go:310] 
	I0729 17:49:47.870255  105708 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token f793nk.j9zxoiw0utdua39g \
	I0729 17:49:47.870352  105708 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:13f0bc092dc09b7291dec830fc09c94db5ac1707dd26df6091df80009af3af7f 
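The two kubeadm join commands above carry a --discovery-token-ca-cert-hash, which is the SHA-256 of the cluster CA's DER-encoded Subject Public Key Info. As a minimal, illustrative Go sketch (not part of minikube; the certificate path is assumed from the certificateDir logged above), the value can be recomputed like this:

    // hash_sketch.go: recompute the sha256:... pin that kubeadm prints above.
    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/hex"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	// Assumed path, taken from the "[certs] Using certificateDir" line above.
    	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		panic("no PEM block found in ca.crt")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// kubeadm pins the SHA-256 of the CA's DER-encoded Subject Public Key Info.
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
    }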
	I0729 17:49:47.870363  105708 cni.go:84] Creating CNI manager for ""
	I0729 17:49:47.870369  105708 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0729 17:49:47.871961  105708 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0729 17:49:47.873106  105708 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0729 17:49:47.878863  105708 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0729 17:49:47.878884  105708 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0729 17:49:47.898096  105708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0729 17:49:48.273726  105708 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 17:49:48.273833  105708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:49:48.273847  105708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-794405 minikube.k8s.io/updated_at=2024_07_29T17_49_48_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=de53afae5e8f4269438099f4dad14d93a8a17e35 minikube.k8s.io/name=ha-794405 minikube.k8s.io/primary=true
	I0729 17:49:48.392193  105708 ops.go:34] apiserver oom_adj: -16
	I0729 17:49:48.397487  105708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:49:48.898351  105708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:49:49.397647  105708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:49:49.898315  105708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:49:50.397596  105708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:49:50.898011  105708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:49:51.398316  105708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:49:51.897916  105708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:49:52.398054  105708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:49:52.898459  105708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:49:53.397571  105708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:49:53.898087  105708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:49:54.398327  105708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:49:54.898300  105708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:49:55.397773  105708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:49:55.897494  105708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:49:56.397574  105708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:49:56.897575  105708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:49:57.398162  105708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:49:57.898284  105708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:49:58.397866  105708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:49:58.897616  105708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:49:59.398293  105708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:49:59.897528  105708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:50:00.398548  105708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:50:00.898527  105708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:50:00.991403  105708 kubeadm.go:1113] duration metric: took 12.717647136s to wait for elevateKubeSystemPrivileges
	I0729 17:50:00.991435  105708 kubeadm.go:394] duration metric: took 24.573549363s to StartCluster
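The repeated "kubectl get sa default" runs above are minikube polling until the default ServiceAccount exists before elevating kube-system privileges. A minimal sketch of that polling pattern, assuming the binary and kubeconfig paths from this log (not minikube's actual implementation):

    // wait_sa_sketch.go: re-run "kubectl get sa default" about every 500ms
    // until it succeeds or the deadline passes, mirroring the loop above.
    package main

    import (
    	"context"
    	"fmt"
    	"os/exec"
    	"time"
    )

    func waitForDefaultSA(ctx context.Context, kubectl, kubeconfig string) error {
    	ticker := time.NewTicker(500 * time.Millisecond)
    	defer ticker.Stop()
    	for {
    		cmd := exec.CommandContext(ctx, "sudo", kubectl, "get", "sa", "default",
    			"--kubeconfig="+kubeconfig)
    		if err := cmd.Run(); err == nil {
    			return nil // the default ServiceAccount is present
    		}
    		select {
    		case <-ctx.Done():
    			return fmt.Errorf("timed out waiting for default ServiceAccount: %w", ctx.Err())
    		case <-ticker.C:
    		}
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
    	defer cancel()
    	// Paths assumed from the log lines above.
    	err := waitForDefaultSA(ctx, "/var/lib/minikube/binaries/v1.30.3/kubectl",
    		"/var/lib/minikube/kubeconfig")
    	fmt.Println(err)
    }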
	I0729 17:50:00.991454  105708 settings.go:142] acquiring lock: {Name:mk5599d3fc2f664ed5eea99f33b4436f64ab8c83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:50:00.991544  105708 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19339-88081/kubeconfig
	I0729 17:50:00.992360  105708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/kubeconfig: {Name:mk6702c0db404329102489655bdd2ff03ad6e919 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:50:00.992634  105708 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 17:50:00.992628  105708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0729 17:50:00.992674  105708 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 17:50:00.992734  105708 addons.go:69] Setting storage-provisioner=true in profile "ha-794405"
	I0729 17:50:00.992755  105708 addons.go:234] Setting addon storage-provisioner=true in "ha-794405"
	I0729 17:50:00.992658  105708 start.go:241] waiting for startup goroutines ...
	I0729 17:50:00.992781  105708 addons.go:69] Setting default-storageclass=true in profile "ha-794405"
	I0729 17:50:00.992798  105708 host.go:66] Checking if "ha-794405" exists ...
	I0729 17:50:00.992833  105708 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-794405"
	I0729 17:50:00.992958  105708 config.go:182] Loaded profile config "ha-794405": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:50:00.993262  105708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:50:00.993295  105708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:50:00.993308  105708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:50:00.993337  105708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:50:01.008821  105708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39031
	I0729 17:50:01.008899  105708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35927
	I0729 17:50:01.009407  105708 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:50:01.009474  105708 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:50:01.009972  105708 main.go:141] libmachine: Using API Version  1
	I0729 17:50:01.009979  105708 main.go:141] libmachine: Using API Version  1
	I0729 17:50:01.009995  105708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:50:01.010024  105708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:50:01.010319  105708 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:50:01.010325  105708 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:50:01.010485  105708 main.go:141] libmachine: (ha-794405) Calling .GetState
	I0729 17:50:01.010906  105708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:50:01.010948  105708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:50:01.012581  105708 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19339-88081/kubeconfig
	I0729 17:50:01.012953  105708 kapi.go:59] client config for ha-794405: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/client.crt", KeyFile:"/home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/client.key", CAFile:"/home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 17:50:01.013422  105708 cert_rotation.go:137] Starting client certificate rotation controller
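The rest.Config dump above shows the client built from the test kubeconfig, authenticating with the profile's client certificate. A minimal client-go sketch that builds an equivalent client and issues the same kind of storage-class request seen further down in the log (paths assumed from this run; not the code minikube uses):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Kubeconfig path assumed from the "Config loaded from file" line above.
    	cfg, err := clientcmd.BuildConfigFromFlags("",
    		"/home/jenkins/minikube-integration/19339-88081/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Equivalent to the GET on /apis/storage.k8s.io/v1/storageclasses logged below.
    	scs, err := client.StorageV1().StorageClasses().List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, sc := range scs.Items {
    		fmt.Println(sc.Name)
    	}
    }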
	I0729 17:50:01.013585  105708 addons.go:234] Setting addon default-storageclass=true in "ha-794405"
	I0729 17:50:01.013629  105708 host.go:66] Checking if "ha-794405" exists ...
	I0729 17:50:01.013895  105708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:50:01.013932  105708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:50:01.025602  105708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40799
	I0729 17:50:01.025987  105708 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:50:01.026483  105708 main.go:141] libmachine: Using API Version  1
	I0729 17:50:01.026508  105708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:50:01.026866  105708 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:50:01.027060  105708 main.go:141] libmachine: (ha-794405) Calling .GetState
	I0729 17:50:01.028244  105708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41205
	I0729 17:50:01.028631  105708 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:50:01.028748  105708 main.go:141] libmachine: (ha-794405) Calling .DriverName
	I0729 17:50:01.029562  105708 main.go:141] libmachine: Using API Version  1
	I0729 17:50:01.029585  105708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:50:01.030995  105708 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:50:01.031213  105708 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 17:50:01.031542  105708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:50:01.031572  105708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:50:01.032627  105708 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 17:50:01.032648  105708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 17:50:01.032669  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHHostname
	I0729 17:50:01.035387  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:50:01.035811  105708 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:50:01.035839  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:50:01.036001  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHPort
	I0729 17:50:01.036171  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:50:01.036305  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHUsername
	I0729 17:50:01.036437  105708 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405/id_rsa Username:docker}
	I0729 17:50:01.046281  105708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46543
	I0729 17:50:01.046677  105708 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:50:01.047085  105708 main.go:141] libmachine: Using API Version  1
	I0729 17:50:01.047105  105708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:50:01.047404  105708 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:50:01.047567  105708 main.go:141] libmachine: (ha-794405) Calling .GetState
	I0729 17:50:01.048792  105708 main.go:141] libmachine: (ha-794405) Calling .DriverName
	I0729 17:50:01.048991  105708 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 17:50:01.049007  105708 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 17:50:01.049023  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHHostname
	I0729 17:50:01.051432  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:50:01.051810  105708 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:50:01.051838  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:50:01.051964  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHPort
	I0729 17:50:01.052128  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:50:01.052254  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHUsername
	I0729 17:50:01.052371  105708 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405/id_rsa Username:docker}
	I0729 17:50:01.160458  105708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0729 17:50:01.173405  105708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 17:50:01.227718  105708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 17:50:01.652915  105708 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0729 17:50:01.653003  105708 main.go:141] libmachine: Making call to close driver server
	I0729 17:50:01.653030  105708 main.go:141] libmachine: (ha-794405) Calling .Close
	I0729 17:50:01.653359  105708 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:50:01.653379  105708 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:50:01.653380  105708 main.go:141] libmachine: (ha-794405) DBG | Closing plugin on server side
	I0729 17:50:01.653388  105708 main.go:141] libmachine: Making call to close driver server
	I0729 17:50:01.653397  105708 main.go:141] libmachine: (ha-794405) Calling .Close
	I0729 17:50:01.653646  105708 main.go:141] libmachine: (ha-794405) DBG | Closing plugin on server side
	I0729 17:50:01.653679  105708 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:50:01.653687  105708 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:50:01.653818  105708 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0729 17:50:01.653830  105708 round_trippers.go:469] Request Headers:
	I0729 17:50:01.653841  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:50:01.653847  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:50:01.662935  105708 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0729 17:50:01.663732  105708 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0729 17:50:01.663751  105708 round_trippers.go:469] Request Headers:
	I0729 17:50:01.663762  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:50:01.663770  105708 round_trippers.go:473]     Content-Type: application/json
	I0729 17:50:01.663776  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:50:01.666393  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:50:01.666546  105708 main.go:141] libmachine: Making call to close driver server
	I0729 17:50:01.666564  105708 main.go:141] libmachine: (ha-794405) Calling .Close
	I0729 17:50:01.666814  105708 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:50:01.666834  105708 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:50:01.915772  105708 main.go:141] libmachine: Making call to close driver server
	I0729 17:50:01.915797  105708 main.go:141] libmachine: (ha-794405) Calling .Close
	I0729 17:50:01.916079  105708 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:50:01.916103  105708 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:50:01.916113  105708 main.go:141] libmachine: Making call to close driver server
	I0729 17:50:01.916122  105708 main.go:141] libmachine: (ha-794405) Calling .Close
	I0729 17:50:01.916200  105708 main.go:141] libmachine: (ha-794405) DBG | Closing plugin on server side
	I0729 17:50:01.916373  105708 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:50:01.916390  105708 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:50:01.918156  105708 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0729 17:50:01.919363  105708 addons.go:510] duration metric: took 926.694377ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0729 17:50:01.919397  105708 start.go:246] waiting for cluster config update ...
	I0729 17:50:01.919413  105708 start.go:255] writing updated cluster config ...
	I0729 17:50:01.921100  105708 out.go:177] 
	I0729 17:50:01.922789  105708 config.go:182] Loaded profile config "ha-794405": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:50:01.922990  105708 profile.go:143] Saving config to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/config.json ...
	I0729 17:50:01.925081  105708 out.go:177] * Starting "ha-794405-m02" control-plane node in "ha-794405" cluster
	I0729 17:50:01.926217  105708 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 17:50:01.926241  105708 cache.go:56] Caching tarball of preloaded images
	I0729 17:50:01.926333  105708 preload.go:172] Found /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 17:50:01.926344  105708 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 17:50:01.926405  105708 profile.go:143] Saving config to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/config.json ...
	I0729 17:50:01.926557  105708 start.go:360] acquireMachinesLock for ha-794405-m02: {Name:mkb4fc91615189a18c2505c715f6575ee0da2912 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:50:01.926596  105708 start.go:364] duration metric: took 21.492µs to acquireMachinesLock for "ha-794405-m02"
	I0729 17:50:01.926624  105708 start.go:93] Provisioning new machine with config: &{Name:ha-794405 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-794405 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 17:50:01.926695  105708 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0729 17:50:01.928252  105708 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 17:50:01.928329  105708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:50:01.928356  105708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:50:01.943467  105708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45805
	I0729 17:50:01.943900  105708 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:50:01.944469  105708 main.go:141] libmachine: Using API Version  1
	I0729 17:50:01.944498  105708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:50:01.944878  105708 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:50:01.945184  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetMachineName
	I0729 17:50:01.945341  105708 main.go:141] libmachine: (ha-794405-m02) Calling .DriverName
	I0729 17:50:01.945526  105708 start.go:159] libmachine.API.Create for "ha-794405" (driver="kvm2")
	I0729 17:50:01.945554  105708 client.go:168] LocalClient.Create starting
	I0729 17:50:01.945600  105708 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem
	I0729 17:50:01.945644  105708 main.go:141] libmachine: Decoding PEM data...
	I0729 17:50:01.945664  105708 main.go:141] libmachine: Parsing certificate...
	I0729 17:50:01.945739  105708 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem
	I0729 17:50:01.945767  105708 main.go:141] libmachine: Decoding PEM data...
	I0729 17:50:01.945783  105708 main.go:141] libmachine: Parsing certificate...
	I0729 17:50:01.945809  105708 main.go:141] libmachine: Running pre-create checks...
	I0729 17:50:01.945826  105708 main.go:141] libmachine: (ha-794405-m02) Calling .PreCreateCheck
	I0729 17:50:01.946000  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetConfigRaw
	I0729 17:50:01.946430  105708 main.go:141] libmachine: Creating machine...
	I0729 17:50:01.946447  105708 main.go:141] libmachine: (ha-794405-m02) Calling .Create
	I0729 17:50:01.946563  105708 main.go:141] libmachine: (ha-794405-m02) Creating KVM machine...
	I0729 17:50:01.947893  105708 main.go:141] libmachine: (ha-794405-m02) DBG | found existing default KVM network
	I0729 17:50:01.947995  105708 main.go:141] libmachine: (ha-794405-m02) DBG | found existing private KVM network mk-ha-794405
	I0729 17:50:01.948183  105708 main.go:141] libmachine: (ha-794405-m02) Setting up store path in /home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m02 ...
	I0729 17:50:01.948210  105708 main.go:141] libmachine: (ha-794405-m02) Building disk image from file:///home/jenkins/minikube-integration/19339-88081/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0729 17:50:01.948255  105708 main.go:141] libmachine: (ha-794405-m02) DBG | I0729 17:50:01.948157  106097 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19339-88081/.minikube
	I0729 17:50:01.948341  105708 main.go:141] libmachine: (ha-794405-m02) Downloading /home/jenkins/minikube-integration/19339-88081/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19339-88081/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0729 17:50:02.206815  105708 main.go:141] libmachine: (ha-794405-m02) DBG | I0729 17:50:02.206692  106097 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m02/id_rsa...
	I0729 17:50:02.429331  105708 main.go:141] libmachine: (ha-794405-m02) DBG | I0729 17:50:02.429205  106097 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m02/ha-794405-m02.rawdisk...
	I0729 17:50:02.429374  105708 main.go:141] libmachine: (ha-794405-m02) DBG | Writing magic tar header
	I0729 17:50:02.429389  105708 main.go:141] libmachine: (ha-794405-m02) DBG | Writing SSH key tar header
	I0729 17:50:02.429401  105708 main.go:141] libmachine: (ha-794405-m02) DBG | I0729 17:50:02.429357  106097 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m02 ...
	I0729 17:50:02.429523  105708 main.go:141] libmachine: (ha-794405-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m02
	I0729 17:50:02.429579  105708 main.go:141] libmachine: (ha-794405-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19339-88081/.minikube/machines
	I0729 17:50:02.429596  105708 main.go:141] libmachine: (ha-794405-m02) Setting executable bit set on /home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m02 (perms=drwx------)
	I0729 17:50:02.429612  105708 main.go:141] libmachine: (ha-794405-m02) Setting executable bit set on /home/jenkins/minikube-integration/19339-88081/.minikube/machines (perms=drwxr-xr-x)
	I0729 17:50:02.429626  105708 main.go:141] libmachine: (ha-794405-m02) Setting executable bit set on /home/jenkins/minikube-integration/19339-88081/.minikube (perms=drwxr-xr-x)
	I0729 17:50:02.429640  105708 main.go:141] libmachine: (ha-794405-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19339-88081/.minikube
	I0729 17:50:02.429657  105708 main.go:141] libmachine: (ha-794405-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19339-88081
	I0729 17:50:02.429667  105708 main.go:141] libmachine: (ha-794405-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 17:50:02.429681  105708 main.go:141] libmachine: (ha-794405-m02) DBG | Checking permissions on dir: /home/jenkins
	I0729 17:50:02.429692  105708 main.go:141] libmachine: (ha-794405-m02) DBG | Checking permissions on dir: /home
	I0729 17:50:02.429716  105708 main.go:141] libmachine: (ha-794405-m02) Setting executable bit set on /home/jenkins/minikube-integration/19339-88081 (perms=drwxrwxr-x)
	I0729 17:50:02.429738  105708 main.go:141] libmachine: (ha-794405-m02) DBG | Skipping /home - not owner
	I0729 17:50:02.429751  105708 main.go:141] libmachine: (ha-794405-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 17:50:02.429766  105708 main.go:141] libmachine: (ha-794405-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 17:50:02.429776  105708 main.go:141] libmachine: (ha-794405-m02) Creating domain...
	I0729 17:50:02.430552  105708 main.go:141] libmachine: (ha-794405-m02) define libvirt domain using xml: 
	I0729 17:50:02.430567  105708 main.go:141] libmachine: (ha-794405-m02) <domain type='kvm'>
	I0729 17:50:02.430574  105708 main.go:141] libmachine: (ha-794405-m02)   <name>ha-794405-m02</name>
	I0729 17:50:02.430579  105708 main.go:141] libmachine: (ha-794405-m02)   <memory unit='MiB'>2200</memory>
	I0729 17:50:02.430584  105708 main.go:141] libmachine: (ha-794405-m02)   <vcpu>2</vcpu>
	I0729 17:50:02.430588  105708 main.go:141] libmachine: (ha-794405-m02)   <features>
	I0729 17:50:02.430593  105708 main.go:141] libmachine: (ha-794405-m02)     <acpi/>
	I0729 17:50:02.430597  105708 main.go:141] libmachine: (ha-794405-m02)     <apic/>
	I0729 17:50:02.430602  105708 main.go:141] libmachine: (ha-794405-m02)     <pae/>
	I0729 17:50:02.430615  105708 main.go:141] libmachine: (ha-794405-m02)     
	I0729 17:50:02.430623  105708 main.go:141] libmachine: (ha-794405-m02)   </features>
	I0729 17:50:02.430634  105708 main.go:141] libmachine: (ha-794405-m02)   <cpu mode='host-passthrough'>
	I0729 17:50:02.430641  105708 main.go:141] libmachine: (ha-794405-m02)   
	I0729 17:50:02.430650  105708 main.go:141] libmachine: (ha-794405-m02)   </cpu>
	I0729 17:50:02.430658  105708 main.go:141] libmachine: (ha-794405-m02)   <os>
	I0729 17:50:02.430667  105708 main.go:141] libmachine: (ha-794405-m02)     <type>hvm</type>
	I0729 17:50:02.430675  105708 main.go:141] libmachine: (ha-794405-m02)     <boot dev='cdrom'/>
	I0729 17:50:02.430684  105708 main.go:141] libmachine: (ha-794405-m02)     <boot dev='hd'/>
	I0729 17:50:02.430691  105708 main.go:141] libmachine: (ha-794405-m02)     <bootmenu enable='no'/>
	I0729 17:50:02.430701  105708 main.go:141] libmachine: (ha-794405-m02)   </os>
	I0729 17:50:02.430711  105708 main.go:141] libmachine: (ha-794405-m02)   <devices>
	I0729 17:50:02.430721  105708 main.go:141] libmachine: (ha-794405-m02)     <disk type='file' device='cdrom'>
	I0729 17:50:02.430737  105708 main.go:141] libmachine: (ha-794405-m02)       <source file='/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m02/boot2docker.iso'/>
	I0729 17:50:02.430749  105708 main.go:141] libmachine: (ha-794405-m02)       <target dev='hdc' bus='scsi'/>
	I0729 17:50:02.430759  105708 main.go:141] libmachine: (ha-794405-m02)       <readonly/>
	I0729 17:50:02.430770  105708 main.go:141] libmachine: (ha-794405-m02)     </disk>
	I0729 17:50:02.430779  105708 main.go:141] libmachine: (ha-794405-m02)     <disk type='file' device='disk'>
	I0729 17:50:02.430805  105708 main.go:141] libmachine: (ha-794405-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 17:50:02.430822  105708 main.go:141] libmachine: (ha-794405-m02)       <source file='/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m02/ha-794405-m02.rawdisk'/>
	I0729 17:50:02.430832  105708 main.go:141] libmachine: (ha-794405-m02)       <target dev='hda' bus='virtio'/>
	I0729 17:50:02.430842  105708 main.go:141] libmachine: (ha-794405-m02)     </disk>
	I0729 17:50:02.430853  105708 main.go:141] libmachine: (ha-794405-m02)     <interface type='network'>
	I0729 17:50:02.430865  105708 main.go:141] libmachine: (ha-794405-m02)       <source network='mk-ha-794405'/>
	I0729 17:50:02.430876  105708 main.go:141] libmachine: (ha-794405-m02)       <model type='virtio'/>
	I0729 17:50:02.430887  105708 main.go:141] libmachine: (ha-794405-m02)     </interface>
	I0729 17:50:02.430897  105708 main.go:141] libmachine: (ha-794405-m02)     <interface type='network'>
	I0729 17:50:02.430910  105708 main.go:141] libmachine: (ha-794405-m02)       <source network='default'/>
	I0729 17:50:02.430921  105708 main.go:141] libmachine: (ha-794405-m02)       <model type='virtio'/>
	I0729 17:50:02.430932  105708 main.go:141] libmachine: (ha-794405-m02)     </interface>
	I0729 17:50:02.430943  105708 main.go:141] libmachine: (ha-794405-m02)     <serial type='pty'>
	I0729 17:50:02.430954  105708 main.go:141] libmachine: (ha-794405-m02)       <target port='0'/>
	I0729 17:50:02.430963  105708 main.go:141] libmachine: (ha-794405-m02)     </serial>
	I0729 17:50:02.430975  105708 main.go:141] libmachine: (ha-794405-m02)     <console type='pty'>
	I0729 17:50:02.430986  105708 main.go:141] libmachine: (ha-794405-m02)       <target type='serial' port='0'/>
	I0729 17:50:02.430998  105708 main.go:141] libmachine: (ha-794405-m02)     </console>
	I0729 17:50:02.431009  105708 main.go:141] libmachine: (ha-794405-m02)     <rng model='virtio'>
	I0729 17:50:02.431021  105708 main.go:141] libmachine: (ha-794405-m02)       <backend model='random'>/dev/random</backend>
	I0729 17:50:02.431031  105708 main.go:141] libmachine: (ha-794405-m02)     </rng>
	I0729 17:50:02.431042  105708 main.go:141] libmachine: (ha-794405-m02)     
	I0729 17:50:02.431051  105708 main.go:141] libmachine: (ha-794405-m02)     
	I0729 17:50:02.431060  105708 main.go:141] libmachine: (ha-794405-m02)   </devices>
	I0729 17:50:02.431069  105708 main.go:141] libmachine: (ha-794405-m02) </domain>
	I0729 17:50:02.431082  105708 main.go:141] libmachine: (ha-794405-m02) 
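The XML above is the libvirt domain definition for the new ha-794405-m02 VM. Purely as an illustration of the define-and-start step (the kvm2 driver talks to libvirt through its Go bindings, not virsh), a hypothetical sketch shelling out to virsh would look like:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // defineAndStart defines a libvirt domain from an XML file and boots it,
    // using the same qemu:///system connection logged in the config above.
    func defineAndStart(xmlPath, name string) error {
    	if out, err := exec.Command("virsh", "--connect", "qemu:///system", "define", xmlPath).CombinedOutput(); err != nil {
    		return fmt.Errorf("define failed: %v: %s", err, out)
    	}
    	if out, err := exec.Command("virsh", "--connect", "qemu:///system", "start", name).CombinedOutput(); err != nil {
    		return fmt.Errorf("start failed: %v: %s", err, out)
    	}
    	return nil
    }

    func main() {
    	// ha-794405-m02.xml is a placeholder file holding the domain XML shown above.
    	if err := defineAndStart("ha-794405-m02.xml", "ha-794405-m02"); err != nil {
    		fmt.Println(err)
    	}
    }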
	I0729 17:50:02.438140  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:8c:42:bd in network default
	I0729 17:50:02.438673  105708 main.go:141] libmachine: (ha-794405-m02) Ensuring networks are active...
	I0729 17:50:02.438694  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:02.439393  105708 main.go:141] libmachine: (ha-794405-m02) Ensuring network default is active
	I0729 17:50:02.439724  105708 main.go:141] libmachine: (ha-794405-m02) Ensuring network mk-ha-794405 is active
	I0729 17:50:02.440088  105708 main.go:141] libmachine: (ha-794405-m02) Getting domain xml...
	I0729 17:50:02.440815  105708 main.go:141] libmachine: (ha-794405-m02) Creating domain...
	I0729 17:50:02.797267  105708 main.go:141] libmachine: (ha-794405-m02) Waiting to get IP...
	I0729 17:50:02.798142  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:02.798581  105708 main.go:141] libmachine: (ha-794405-m02) DBG | unable to find current IP address of domain ha-794405-m02 in network mk-ha-794405
	I0729 17:50:02.798610  105708 main.go:141] libmachine: (ha-794405-m02) DBG | I0729 17:50:02.798550  106097 retry.go:31] will retry after 292.596043ms: waiting for machine to come up
	I0729 17:50:03.093110  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:03.093578  105708 main.go:141] libmachine: (ha-794405-m02) DBG | unable to find current IP address of domain ha-794405-m02 in network mk-ha-794405
	I0729 17:50:03.093626  105708 main.go:141] libmachine: (ha-794405-m02) DBG | I0729 17:50:03.093525  106097 retry.go:31] will retry after 249.181248ms: waiting for machine to come up
	I0729 17:50:03.343933  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:03.344384  105708 main.go:141] libmachine: (ha-794405-m02) DBG | unable to find current IP address of domain ha-794405-m02 in network mk-ha-794405
	I0729 17:50:03.344415  105708 main.go:141] libmachine: (ha-794405-m02) DBG | I0729 17:50:03.344334  106097 retry.go:31] will retry after 435.80599ms: waiting for machine to come up
	I0729 17:50:03.781921  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:03.782363  105708 main.go:141] libmachine: (ha-794405-m02) DBG | unable to find current IP address of domain ha-794405-m02 in network mk-ha-794405
	I0729 17:50:03.782390  105708 main.go:141] libmachine: (ha-794405-m02) DBG | I0729 17:50:03.782318  106097 retry.go:31] will retry after 521.033043ms: waiting for machine to come up
	I0729 17:50:04.305096  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:04.305521  105708 main.go:141] libmachine: (ha-794405-m02) DBG | unable to find current IP address of domain ha-794405-m02 in network mk-ha-794405
	I0729 17:50:04.305587  105708 main.go:141] libmachine: (ha-794405-m02) DBG | I0729 17:50:04.305510  106097 retry.go:31] will retry after 689.093873ms: waiting for machine to come up
	I0729 17:50:04.996280  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:04.996755  105708 main.go:141] libmachine: (ha-794405-m02) DBG | unable to find current IP address of domain ha-794405-m02 in network mk-ha-794405
	I0729 17:50:04.996780  105708 main.go:141] libmachine: (ha-794405-m02) DBG | I0729 17:50:04.996706  106097 retry.go:31] will retry after 952.96779ms: waiting for machine to come up
	I0729 17:50:05.950893  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:05.951247  105708 main.go:141] libmachine: (ha-794405-m02) DBG | unable to find current IP address of domain ha-794405-m02 in network mk-ha-794405
	I0729 17:50:05.951276  105708 main.go:141] libmachine: (ha-794405-m02) DBG | I0729 17:50:05.951214  106097 retry.go:31] will retry after 747.920675ms: waiting for machine to come up
	I0729 17:50:06.701350  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:06.701685  105708 main.go:141] libmachine: (ha-794405-m02) DBG | unable to find current IP address of domain ha-794405-m02 in network mk-ha-794405
	I0729 17:50:06.701716  105708 main.go:141] libmachine: (ha-794405-m02) DBG | I0729 17:50:06.701666  106097 retry.go:31] will retry after 1.243871709s: waiting for machine to come up
	I0729 17:50:07.946750  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:07.947219  105708 main.go:141] libmachine: (ha-794405-m02) DBG | unable to find current IP address of domain ha-794405-m02 in network mk-ha-794405
	I0729 17:50:07.947250  105708 main.go:141] libmachine: (ha-794405-m02) DBG | I0729 17:50:07.947160  106097 retry.go:31] will retry after 1.671917885s: waiting for machine to come up
	I0729 17:50:09.620903  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:09.621411  105708 main.go:141] libmachine: (ha-794405-m02) DBG | unable to find current IP address of domain ha-794405-m02 in network mk-ha-794405
	I0729 17:50:09.621444  105708 main.go:141] libmachine: (ha-794405-m02) DBG | I0729 17:50:09.621353  106097 retry.go:31] will retry after 2.136646754s: waiting for machine to come up
	I0729 17:50:11.760209  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:11.760703  105708 main.go:141] libmachine: (ha-794405-m02) DBG | unable to find current IP address of domain ha-794405-m02 in network mk-ha-794405
	I0729 17:50:11.760732  105708 main.go:141] libmachine: (ha-794405-m02) DBG | I0729 17:50:11.760630  106097 retry.go:31] will retry after 1.864944726s: waiting for machine to come up
	I0729 17:50:13.628039  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:13.628439  105708 main.go:141] libmachine: (ha-794405-m02) DBG | unable to find current IP address of domain ha-794405-m02 in network mk-ha-794405
	I0729 17:50:13.628461  105708 main.go:141] libmachine: (ha-794405-m02) DBG | I0729 17:50:13.628402  106097 retry.go:31] will retry after 3.226289483s: waiting for machine to come up
	I0729 17:50:16.858269  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:16.858719  105708 main.go:141] libmachine: (ha-794405-m02) DBG | unable to find current IP address of domain ha-794405-m02 in network mk-ha-794405
	I0729 17:50:16.858750  105708 main.go:141] libmachine: (ha-794405-m02) DBG | I0729 17:50:16.858653  106097 retry.go:31] will retry after 3.139463175s: waiting for machine to come up
	I0729 17:50:20.002174  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:20.002520  105708 main.go:141] libmachine: (ha-794405-m02) DBG | unable to find current IP address of domain ha-794405-m02 in network mk-ha-794405
	I0729 17:50:20.002552  105708 main.go:141] libmachine: (ha-794405-m02) DBG | I0729 17:50:20.002473  106097 retry.go:31] will retry after 3.930462308s: waiting for machine to come up
	I0729 17:50:23.934909  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:23.935367  105708 main.go:141] libmachine: (ha-794405-m02) Found IP for machine: 192.168.39.62
	I0729 17:50:23.935398  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has current primary IP address 192.168.39.62 and MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
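The retry.go lines above show the wait-for-IP loop: each failed DHCP-lease lookup is retried after a growing, jittered delay until the lease appears. A generic sketch of that retry-with-backoff shape (the delays and the lookup are placeholders, not minikube's code):

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff re-runs fn until it succeeds or attempts run out, sleeping
    // a jittered, roughly doubling delay between tries -- the same shape as the
    // "will retry after 292ms ... 3.9s" lines above.
    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
    	delay := base
    	for i := 0; i < attempts; i++ {
    		if err := fn(); err == nil {
    			return nil
    		}
    		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
    		time.Sleep(delay + jitter)
    		delay *= 2
    	}
    	return errors.New("gave up waiting for the machine to come up")
    }

    func main() {
    	err := retryWithBackoff(10, 250*time.Millisecond, func() error {
    		// Placeholder for "look up the DHCP lease for the VM's MAC address".
    		return errors.New("no lease yet")
    	})
    	fmt.Println(err)
    }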
	I0729 17:50:23.935408  105708 main.go:141] libmachine: (ha-794405-m02) Reserving static IP address...
	I0729 17:50:23.935720  105708 main.go:141] libmachine: (ha-794405-m02) DBG | unable to find host DHCP lease matching {name: "ha-794405-m02", mac: "52:54:00:1a:4a:02", ip: "192.168.39.62"} in network mk-ha-794405
	I0729 17:50:24.008414  105708 main.go:141] libmachine: (ha-794405-m02) Reserved static IP address: 192.168.39.62
	I0729 17:50:24.008449  105708 main.go:141] libmachine: (ha-794405-m02) DBG | Getting to WaitForSSH function...
	I0729 17:50:24.008458  105708 main.go:141] libmachine: (ha-794405-m02) Waiting for SSH to be available...
	I0729 17:50:24.010923  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:24.011287  105708 main.go:141] libmachine: (ha-794405-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:4a:02", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:50:15 +0000 UTC Type:0 Mac:52:54:00:1a:4a:02 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:minikube Clientid:01:52:54:00:1a:4a:02}
	I0729 17:50:24.011316  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:24.011448  105708 main.go:141] libmachine: (ha-794405-m02) DBG | Using SSH client type: external
	I0729 17:50:24.011475  105708 main.go:141] libmachine: (ha-794405-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m02/id_rsa (-rw-------)
	I0729 17:50:24.011514  105708 main.go:141] libmachine: (ha-794405-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.62 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 17:50:24.011537  105708 main.go:141] libmachine: (ha-794405-m02) DBG | About to run SSH command:
	I0729 17:50:24.011554  105708 main.go:141] libmachine: (ha-794405-m02) DBG | exit 0
	I0729 17:50:24.136970  105708 main.go:141] libmachine: (ha-794405-m02) DBG | SSH cmd err, output: <nil>: 
	I0729 17:50:24.137257  105708 main.go:141] libmachine: (ha-794405-m02) KVM machine creation complete!
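WaitForSSH above probes the guest by running "exit 0" over SSH with host-key checking disabled. A minimal sketch of such a probe using golang.org/x/crypto/ssh, with the address, user, and key path taken from this log (illustrative only, not the libmachine implementation):

    package main

    import (
    	"fmt"
    	"os"
    	"time"

    	"golang.org/x/crypto/ssh"
    )

    // sshReady returns nil once "exit 0" succeeds on the guest.
    func sshReady(addr, user, keyPath string) error {
    	keyBytes, err := os.ReadFile(keyPath)
    	if err != nil {
    		return err
    	}
    	signer, err := ssh.ParsePrivateKey(keyBytes)
    	if err != nil {
    		return err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
    		Timeout:         10 * time.Second,
    	}
    	client, err := ssh.Dial("tcp", addr, cfg)
    	if err != nil {
    		return err
    	}
    	defer client.Close()
    	session, err := client.NewSession()
    	if err != nil {
    		return err
    	}
    	defer session.Close()
    	return session.Run("exit 0")
    }

    func main() {
    	err := sshReady("192.168.39.62:22", "docker",
    		"/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m02/id_rsa")
    	fmt.Println(err)
    }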
	I0729 17:50:24.137629  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetConfigRaw
	I0729 17:50:24.138203  105708 main.go:141] libmachine: (ha-794405-m02) Calling .DriverName
	I0729 17:50:24.138427  105708 main.go:141] libmachine: (ha-794405-m02) Calling .DriverName
	I0729 17:50:24.138606  105708 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 17:50:24.138620  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetState
	I0729 17:50:24.139891  105708 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 17:50:24.139913  105708 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 17:50:24.139930  105708 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 17:50:24.139937  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHHostname
	I0729 17:50:24.142295  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:24.142636  105708 main.go:141] libmachine: (ha-794405-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:4a:02", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:50:15 +0000 UTC Type:0 Mac:52:54:00:1a:4a:02 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-794405-m02 Clientid:01:52:54:00:1a:4a:02}
	I0729 17:50:24.142667  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:24.142783  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHPort
	I0729 17:50:24.142977  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHKeyPath
	I0729 17:50:24.143144  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHKeyPath
	I0729 17:50:24.143296  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHUsername
	I0729 17:50:24.143499  105708 main.go:141] libmachine: Using SSH client type: native
	I0729 17:50:24.143710  105708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I0729 17:50:24.143722  105708 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 17:50:24.248119  105708 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 17:50:24.248143  105708 main.go:141] libmachine: Detecting the provisioner...
	I0729 17:50:24.248152  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHHostname
	I0729 17:50:24.250902  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:24.251267  105708 main.go:141] libmachine: (ha-794405-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:4a:02", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:50:15 +0000 UTC Type:0 Mac:52:54:00:1a:4a:02 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-794405-m02 Clientid:01:52:54:00:1a:4a:02}
	I0729 17:50:24.251296  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:24.251429  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHPort
	I0729 17:50:24.251630  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHKeyPath
	I0729 17:50:24.251763  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHKeyPath
	I0729 17:50:24.251872  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHUsername
	I0729 17:50:24.252028  105708 main.go:141] libmachine: Using SSH client type: native
	I0729 17:50:24.252186  105708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I0729 17:50:24.252197  105708 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 17:50:24.353332  105708 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 17:50:24.353405  105708 main.go:141] libmachine: found compatible host: buildroot
	I0729 17:50:24.353419  105708 main.go:141] libmachine: Provisioning with buildroot...
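
	The provisioner detection above amounts to running cat /etc/os-release over SSH and keying off its ID/VERSION_ID fields (here buildroot 2023.02.9). A minimal, self-contained Go sketch of that parsing, shown running against the local filesystem rather than over SSH; it is illustrative only, not minikube's implementation:

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func main() {
		f, err := os.Open("/etc/os-release")
		if err != nil {
			panic(err)
		}
		defer f.Close()

		vals := map[string]string{}
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			// Lines look like KEY=value or KEY="value"; keep the unquoted value.
			if k, v, ok := strings.Cut(sc.Text(), "="); ok {
				vals[k] = strings.Trim(v, `"`)
			}
		}
		// On the guest above this would print: ID=buildroot VERSION_ID=2023.02.9
		fmt.Printf("ID=%s VERSION_ID=%s\n", vals["ID"], vals["VERSION_ID"])
	}
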
	I0729 17:50:24.353430  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetMachineName
	I0729 17:50:24.353674  105708 buildroot.go:166] provisioning hostname "ha-794405-m02"
	I0729 17:50:24.353708  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetMachineName
	I0729 17:50:24.353880  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHHostname
	I0729 17:50:24.356482  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:24.356845  105708 main.go:141] libmachine: (ha-794405-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:4a:02", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:50:15 +0000 UTC Type:0 Mac:52:54:00:1a:4a:02 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-794405-m02 Clientid:01:52:54:00:1a:4a:02}
	I0729 17:50:24.356894  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:24.357069  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHPort
	I0729 17:50:24.357246  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHKeyPath
	I0729 17:50:24.357419  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHKeyPath
	I0729 17:50:24.357576  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHUsername
	I0729 17:50:24.357739  105708 main.go:141] libmachine: Using SSH client type: native
	I0729 17:50:24.357902  105708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I0729 17:50:24.357914  105708 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-794405-m02 && echo "ha-794405-m02" | sudo tee /etc/hostname
	I0729 17:50:24.475521  105708 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-794405-m02
	
	I0729 17:50:24.475553  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHHostname
	I0729 17:50:24.478081  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:24.478428  105708 main.go:141] libmachine: (ha-794405-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:4a:02", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:50:15 +0000 UTC Type:0 Mac:52:54:00:1a:4a:02 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-794405-m02 Clientid:01:52:54:00:1a:4a:02}
	I0729 17:50:24.478453  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:24.478623  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHPort
	I0729 17:50:24.478799  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHKeyPath
	I0729 17:50:24.478962  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHKeyPath
	I0729 17:50:24.479099  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHUsername
	I0729 17:50:24.479294  105708 main.go:141] libmachine: Using SSH client type: native
	I0729 17:50:24.479463  105708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I0729 17:50:24.479479  105708 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-794405-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-794405-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-794405-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 17:50:24.589289  105708 main.go:141] libmachine: SSH cmd err, output: <nil>: 
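
	The SSH command above is an idempotent /etc/hosts fixup: if no entry ends in the node name, it either rewrites an existing 127.0.1.1 line or appends one. A sketch of generating the same snippet for an arbitrary hostname (the hostsFixup helper is hypothetical, not minikube's template):

	package main

	import "fmt"

	// hostsFixup returns a shell snippet equivalent to the one logged above: it
	// maps 127.0.1.1 to the given hostname in /etc/hosts if no entry exists yet.
	func hostsFixup(name string) string {
		return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, name)
	}

	func main() {
		fmt.Println(hostsFixup("ha-794405-m02"))
	}
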
	I0729 17:50:24.589319  105708 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19339-88081/.minikube CaCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19339-88081/.minikube}
	I0729 17:50:24.589338  105708 buildroot.go:174] setting up certificates
	I0729 17:50:24.589348  105708 provision.go:84] configureAuth start
	I0729 17:50:24.589359  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetMachineName
	I0729 17:50:24.589626  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetIP
	I0729 17:50:24.592085  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:24.592383  105708 main.go:141] libmachine: (ha-794405-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:4a:02", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:50:15 +0000 UTC Type:0 Mac:52:54:00:1a:4a:02 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-794405-m02 Clientid:01:52:54:00:1a:4a:02}
	I0729 17:50:24.592410  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:24.592488  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHHostname
	I0729 17:50:24.594455  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:24.594773  105708 main.go:141] libmachine: (ha-794405-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:4a:02", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:50:15 +0000 UTC Type:0 Mac:52:54:00:1a:4a:02 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-794405-m02 Clientid:01:52:54:00:1a:4a:02}
	I0729 17:50:24.594811  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:24.594914  105708 provision.go:143] copyHostCerts
	I0729 17:50:24.594962  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem
	I0729 17:50:24.594999  105708 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem, removing ...
	I0729 17:50:24.595011  105708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem
	I0729 17:50:24.595087  105708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem (1078 bytes)
	I0729 17:50:24.595174  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem
	I0729 17:50:24.595198  105708 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem, removing ...
	I0729 17:50:24.595207  105708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem
	I0729 17:50:24.595239  105708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem (1123 bytes)
	I0729 17:50:24.595301  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem
	I0729 17:50:24.595321  105708 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem, removing ...
	I0729 17:50:24.595330  105708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem
	I0729 17:50:24.595364  105708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem (1679 bytes)
	I0729 17:50:24.595429  105708 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem org=jenkins.ha-794405-m02 san=[127.0.0.1 192.168.39.62 ha-794405-m02 localhost minikube]
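
	The server certificate is issued from the local CA with the SANs listed above (127.0.0.1, 192.168.39.62, ha-794405-m02, localhost, minikube). A standalone Go sketch of that issuance using crypto/x509; the ca.pem/ca-key.pem file names, the PKCS#1 key encoding, and the three-year validity are assumptions, not values taken from the log:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func must[T any](v T, err error) T {
		if err != nil {
			panic(err)
		}
		return v
	}

	func main() {
		// Load the CA pair; file names and PKCS#1 key encoding are assumptions.
		caBlock, _ := pem.Decode(must(os.ReadFile("ca.pem")))
		keyBlock, _ := pem.Decode(must(os.ReadFile("ca-key.pem")))
		caCert := must(x509.ParseCertificate(caBlock.Bytes))
		caKey := must(x509.ParsePKCS1PrivateKey(keyBlock.Bytes))

		// New server key plus a template carrying the SANs from the log line above.
		key := must(rsa.GenerateKey(rand.Reader, 2048))
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-794405-m02"}},
			DNSNames:     []string{"ha-794405-m02", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.62")},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der := must(x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey))

		certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
		keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
		if err := os.WriteFile("server.pem", certPEM, 0o644); err != nil {
			panic(err)
		}
		if err := os.WriteFile("server-key.pem", keyPEM, 0o600); err != nil {
			panic(err)
		}
	}
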
	I0729 17:50:24.689531  105708 provision.go:177] copyRemoteCerts
	I0729 17:50:24.689589  105708 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 17:50:24.689613  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHHostname
	I0729 17:50:24.691979  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:24.692254  105708 main.go:141] libmachine: (ha-794405-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:4a:02", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:50:15 +0000 UTC Type:0 Mac:52:54:00:1a:4a:02 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-794405-m02 Clientid:01:52:54:00:1a:4a:02}
	I0729 17:50:24.692282  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:24.692399  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHPort
	I0729 17:50:24.692567  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHKeyPath
	I0729 17:50:24.692703  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHUsername
	I0729 17:50:24.692821  105708 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m02/id_rsa Username:docker}
	I0729 17:50:24.775583  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 17:50:24.775674  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 17:50:24.800666  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 17:50:24.800749  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0729 17:50:24.824627  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 17:50:24.824693  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 17:50:24.851265  105708 provision.go:87] duration metric: took 261.904202ms to configureAuth
	I0729 17:50:24.851288  105708 buildroot.go:189] setting minikube options for container-runtime
	I0729 17:50:24.851485  105708 config.go:182] Loaded profile config "ha-794405": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:50:24.851574  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHHostname
	I0729 17:50:24.854353  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:24.854751  105708 main.go:141] libmachine: (ha-794405-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:4a:02", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:50:15 +0000 UTC Type:0 Mac:52:54:00:1a:4a:02 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-794405-m02 Clientid:01:52:54:00:1a:4a:02}
	I0729 17:50:24.854774  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:24.854972  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHPort
	I0729 17:50:24.855187  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHKeyPath
	I0729 17:50:24.855369  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHKeyPath
	I0729 17:50:24.855527  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHUsername
	I0729 17:50:24.855729  105708 main.go:141] libmachine: Using SSH client type: native
	I0729 17:50:24.855895  105708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I0729 17:50:24.855909  105708 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 17:50:25.115172  105708 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
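
	The literal %!s(MISSING) in the command above (and in similar lines later in this log) is a logging artifact rather than what ran on the guest: the command embeds a bare %s for printf, and when that string is echoed through a printf-style logger with no matching argument, Go's fmt package renders the verb as %!s(MISSING). The output line confirms the drop-in that was actually written. A two-line reproduction of the artifact:

	package main

	import "fmt"

	func main() {
		// The guest-side command embeds a literal %s for printf; echoing it through
		// a printf-style call with no matching argument produces the artifact.
		cmd := `sudo mkdir -p /etc/sysconfig && printf %s "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" | sudo tee /etc/sysconfig/crio.minikube`
		fmt.Printf("About to run SSH command:\n" + cmd + "\n") // prints ... printf %!s(MISSING) ...
		fmt.Printf("About to run SSH command:\n%s\n", cmd)     // prints the command intact
	}
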
	
	I0729 17:50:25.115202  105708 main.go:141] libmachine: Checking connection to Docker...
	I0729 17:50:25.115212  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetURL
	I0729 17:50:25.116573  105708 main.go:141] libmachine: (ha-794405-m02) DBG | Using libvirt version 6000000
	I0729 17:50:25.118668  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:25.118991  105708 main.go:141] libmachine: (ha-794405-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:4a:02", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:50:15 +0000 UTC Type:0 Mac:52:54:00:1a:4a:02 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-794405-m02 Clientid:01:52:54:00:1a:4a:02}
	I0729 17:50:25.119024  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:25.119225  105708 main.go:141] libmachine: Docker is up and running!
	I0729 17:50:25.119244  105708 main.go:141] libmachine: Reticulating splines...
	I0729 17:50:25.119252  105708 client.go:171] duration metric: took 23.173687306s to LocalClient.Create
	I0729 17:50:25.119275  105708 start.go:167] duration metric: took 23.173752916s to libmachine.API.Create "ha-794405"
	I0729 17:50:25.119285  105708 start.go:293] postStartSetup for "ha-794405-m02" (driver="kvm2")
	I0729 17:50:25.119295  105708 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 17:50:25.119310  105708 main.go:141] libmachine: (ha-794405-m02) Calling .DriverName
	I0729 17:50:25.119560  105708 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 17:50:25.119584  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHHostname
	I0729 17:50:25.121881  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:25.122217  105708 main.go:141] libmachine: (ha-794405-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:4a:02", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:50:15 +0000 UTC Type:0 Mac:52:54:00:1a:4a:02 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-794405-m02 Clientid:01:52:54:00:1a:4a:02}
	I0729 17:50:25.122249  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:25.122363  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHPort
	I0729 17:50:25.122553  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHKeyPath
	I0729 17:50:25.122712  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHUsername
	I0729 17:50:25.122844  105708 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m02/id_rsa Username:docker}
	I0729 17:50:25.202815  105708 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 17:50:25.207271  105708 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 17:50:25.207291  105708 filesync.go:126] Scanning /home/jenkins/minikube-integration/19339-88081/.minikube/addons for local assets ...
	I0729 17:50:25.207351  105708 filesync.go:126] Scanning /home/jenkins/minikube-integration/19339-88081/.minikube/files for local assets ...
	I0729 17:50:25.207424  105708 filesync.go:149] local asset: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem -> 952822.pem in /etc/ssl/certs
	I0729 17:50:25.207435  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem -> /etc/ssl/certs/952822.pem
	I0729 17:50:25.207509  105708 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 17:50:25.216379  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem --> /etc/ssl/certs/952822.pem (1708 bytes)
	I0729 17:50:25.239471  105708 start.go:296] duration metric: took 120.173209ms for postStartSetup
	I0729 17:50:25.239520  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetConfigRaw
	I0729 17:50:25.240058  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetIP
	I0729 17:50:25.242548  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:25.243044  105708 main.go:141] libmachine: (ha-794405-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:4a:02", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:50:15 +0000 UTC Type:0 Mac:52:54:00:1a:4a:02 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-794405-m02 Clientid:01:52:54:00:1a:4a:02}
	I0729 17:50:25.243075  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:25.243323  105708 profile.go:143] Saving config to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/config.json ...
	I0729 17:50:25.243501  105708 start.go:128] duration metric: took 23.31679432s to createHost
	I0729 17:50:25.243523  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHHostname
	I0729 17:50:25.245677  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:25.245977  105708 main.go:141] libmachine: (ha-794405-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:4a:02", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:50:15 +0000 UTC Type:0 Mac:52:54:00:1a:4a:02 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-794405-m02 Clientid:01:52:54:00:1a:4a:02}
	I0729 17:50:25.246005  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:25.246151  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHPort
	I0729 17:50:25.246321  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHKeyPath
	I0729 17:50:25.246430  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHKeyPath
	I0729 17:50:25.246510  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHUsername
	I0729 17:50:25.246631  105708 main.go:141] libmachine: Using SSH client type: native
	I0729 17:50:25.246875  105708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I0729 17:50:25.246889  105708 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 17:50:25.349435  105708 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722275425.326707607
	
	I0729 17:50:25.349459  105708 fix.go:216] guest clock: 1722275425.326707607
	I0729 17:50:25.349468  105708 fix.go:229] Guest: 2024-07-29 17:50:25.326707607 +0000 UTC Remote: 2024-07-29 17:50:25.243512506 +0000 UTC m=+82.453073606 (delta=83.195101ms)
	I0729 17:50:25.349492  105708 fix.go:200] guest clock delta is within tolerance: 83.195101ms
	I0729 17:50:25.349499  105708 start.go:83] releasing machines lock for "ha-794405-m02", held for 23.422883421s
	I0729 17:50:25.349518  105708 main.go:141] libmachine: (ha-794405-m02) Calling .DriverName
	I0729 17:50:25.349804  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetIP
	I0729 17:50:25.352168  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:25.352505  105708 main.go:141] libmachine: (ha-794405-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:4a:02", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:50:15 +0000 UTC Type:0 Mac:52:54:00:1a:4a:02 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-794405-m02 Clientid:01:52:54:00:1a:4a:02}
	I0729 17:50:25.352539  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:25.354836  105708 out.go:177] * Found network options:
	I0729 17:50:25.356053  105708 out.go:177]   - NO_PROXY=192.168.39.102
	W0729 17:50:25.357226  105708 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 17:50:25.357252  105708 main.go:141] libmachine: (ha-794405-m02) Calling .DriverName
	I0729 17:50:25.357733  105708 main.go:141] libmachine: (ha-794405-m02) Calling .DriverName
	I0729 17:50:25.357902  105708 main.go:141] libmachine: (ha-794405-m02) Calling .DriverName
	I0729 17:50:25.357962  105708 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 17:50:25.358006  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHHostname
	W0729 17:50:25.358096  105708 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 17:50:25.358156  105708 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 17:50:25.358171  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHHostname
	I0729 17:50:25.360594  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:25.360887  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:25.360935  105708 main.go:141] libmachine: (ha-794405-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:4a:02", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:50:15 +0000 UTC Type:0 Mac:52:54:00:1a:4a:02 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-794405-m02 Clientid:01:52:54:00:1a:4a:02}
	I0729 17:50:25.360956  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:25.361069  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHPort
	I0729 17:50:25.361218  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHKeyPath
	I0729 17:50:25.361285  105708 main.go:141] libmachine: (ha-794405-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:4a:02", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:50:15 +0000 UTC Type:0 Mac:52:54:00:1a:4a:02 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-794405-m02 Clientid:01:52:54:00:1a:4a:02}
	I0729 17:50:25.361314  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:25.361374  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHUsername
	I0729 17:50:25.361481  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHPort
	I0729 17:50:25.361551  105708 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m02/id_rsa Username:docker}
	I0729 17:50:25.361623  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHKeyPath
	I0729 17:50:25.361793  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHUsername
	I0729 17:50:25.361944  105708 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m02/id_rsa Username:docker}
	I0729 17:50:25.592375  105708 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 17:50:25.598665  105708 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 17:50:25.598719  105708 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 17:50:25.615605  105708 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 17:50:25.615623  105708 start.go:495] detecting cgroup driver to use...
	I0729 17:50:25.615677  105708 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 17:50:25.632375  105708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 17:50:25.645620  105708 docker.go:217] disabling cri-docker service (if available) ...
	I0729 17:50:25.645660  105708 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 17:50:25.659561  105708 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 17:50:25.675559  105708 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 17:50:25.786519  105708 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 17:50:25.949904  105708 docker.go:233] disabling docker service ...
	I0729 17:50:25.949987  105708 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 17:50:25.964662  105708 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 17:50:25.977981  105708 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 17:50:26.112688  105708 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 17:50:26.246776  105708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 17:50:26.261490  105708 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 17:50:26.280323  105708 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 17:50:26.280405  105708 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:50:26.291243  105708 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 17:50:26.291317  105708 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:50:26.301961  105708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:50:26.312821  105708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:50:26.324499  105708 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 17:50:26.336637  105708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:50:26.348224  105708 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:50:26.365485  105708 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
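
	The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: the pause image is pinned to registry.k8s.io/pause:3.9, cgroup_manager is set to cgroupfs, conmon_cgroup is forced to "pod", and net.ipv4.ip_unprivileged_port_start=0 is appended to default_sysctls. A rough Go equivalent of the first two edits (illustrative only; the real work is done via sed over SSH, as logged):

	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		const conf = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(conf)
		if err != nil {
			panic(err)
		}
		// Replace whole lines mentioning pause_image / cgroup_manager, as the sed
		// expressions above do.
		data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
		data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
		if err := os.WriteFile(conf, data, 0o644); err != nil {
			panic(err)
		}
	}
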
	I0729 17:50:26.375878  105708 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 17:50:26.385363  105708 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 17:50:26.385418  105708 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 17:50:26.400237  105708 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
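
	The status-255 sysctl failure above is expected on a fresh guest: /proc/sys/net/bridge/ only exists once br_netfilter is loaded, which the modprobe that follows takes care of before IP forwarding is enabled. A small sketch that probes the proc path directly instead of parsing the sysctl exit code (it assumes the caller may run modprobe via sudo):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
		if _, err := os.Stat(key); os.IsNotExist(err) {
			// The proc entry only appears once br_netfilter is loaded.
			if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
				fmt.Printf("modprobe br_netfilter: %v\n%s", err, out)
				return
			}
		}
		val, err := os.ReadFile(key)
		if err != nil {
			panic(err)
		}
		fmt.Printf("bridge-nf-call-iptables = %s", val)
	}
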
	I0729 17:50:26.410288  105708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 17:50:26.531664  105708 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 17:50:26.667501  105708 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 17:50:26.667594  105708 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 17:50:26.672733  105708 start.go:563] Will wait 60s for crictl version
	I0729 17:50:26.672799  105708 ssh_runner.go:195] Run: which crictl
	I0729 17:50:26.676328  105708 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 17:50:26.718978  105708 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 17:50:26.719077  105708 ssh_runner.go:195] Run: crio --version
	I0729 17:50:26.747155  105708 ssh_runner.go:195] Run: crio --version
	I0729 17:50:26.777360  105708 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 17:50:26.778668  105708 out.go:177]   - env NO_PROXY=192.168.39.102
	I0729 17:50:26.779784  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetIP
	I0729 17:50:26.782353  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:26.782734  105708 main.go:141] libmachine: (ha-794405-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:4a:02", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:50:15 +0000 UTC Type:0 Mac:52:54:00:1a:4a:02 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-794405-m02 Clientid:01:52:54:00:1a:4a:02}
	I0729 17:50:26.782769  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:26.782943  105708 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 17:50:26.786976  105708 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 17:50:26.799733  105708 mustload.go:65] Loading cluster: ha-794405
	I0729 17:50:26.799968  105708 config.go:182] Loaded profile config "ha-794405": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:50:26.800252  105708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:50:26.800281  105708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:50:26.814821  105708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33337
	I0729 17:50:26.815291  105708 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:50:26.815811  105708 main.go:141] libmachine: Using API Version  1
	I0729 17:50:26.815836  105708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:50:26.816141  105708 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:50:26.816326  105708 main.go:141] libmachine: (ha-794405) Calling .GetState
	I0729 17:50:26.817950  105708 host.go:66] Checking if "ha-794405" exists ...
	I0729 17:50:26.818339  105708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:50:26.818389  105708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:50:26.833845  105708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34695
	I0729 17:50:26.834398  105708 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:50:26.835057  105708 main.go:141] libmachine: Using API Version  1
	I0729 17:50:26.835083  105708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:50:26.835437  105708 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:50:26.835641  105708 main.go:141] libmachine: (ha-794405) Calling .DriverName
	I0729 17:50:26.835796  105708 certs.go:68] Setting up /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405 for IP: 192.168.39.62
	I0729 17:50:26.835819  105708 certs.go:194] generating shared ca certs ...
	I0729 17:50:26.835834  105708 certs.go:226] acquiring lock for ca certs: {Name:mk20106d58b3f22ea0fc0f4b499fa3fa572a2690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:50:26.835958  105708 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key
	I0729 17:50:26.835996  105708 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key
	I0729 17:50:26.836006  105708 certs.go:256] generating profile certs ...
	I0729 17:50:26.836077  105708 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/client.key
	I0729 17:50:26.836100  105708 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.key.838ee660
	I0729 17:50:26.836114  105708 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.crt.838ee660 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.102 192.168.39.62 192.168.39.254]
	I0729 17:50:26.888048  105708 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.crt.838ee660 ...
	I0729 17:50:26.888075  105708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.crt.838ee660: {Name:mkfc61a8a666685e5f20b7ed9465d09419315008 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:50:26.888258  105708 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.key.838ee660 ...
	I0729 17:50:26.888276  105708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.key.838ee660: {Name:mke59070840099e39d97d4ecf9944713af9aa4f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:50:26.888368  105708 certs.go:381] copying /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.crt.838ee660 -> /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.crt
	I0729 17:50:26.888534  105708 certs.go:385] copying /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.key.838ee660 -> /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.key
	I0729 17:50:26.888706  105708 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/proxy-client.key
	I0729 17:50:26.888727  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 17:50:26.888745  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 17:50:26.888764  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 17:50:26.888783  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 17:50:26.888800  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 17:50:26.888820  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 17:50:26.888838  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 17:50:26.888876  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 17:50:26.888986  105708 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282.pem (1338 bytes)
	W0729 17:50:26.889035  105708 certs.go:480] ignoring /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282_empty.pem, impossibly tiny 0 bytes
	I0729 17:50:26.889048  105708 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 17:50:26.889080  105708 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem (1078 bytes)
	I0729 17:50:26.889112  105708 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem (1123 bytes)
	I0729 17:50:26.889143  105708 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem (1679 bytes)
	I0729 17:50:26.889217  105708 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem (1708 bytes)
	I0729 17:50:26.889271  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:50:26.889291  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282.pem -> /usr/share/ca-certificates/95282.pem
	I0729 17:50:26.889308  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem -> /usr/share/ca-certificates/952822.pem
	I0729 17:50:26.889348  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHHostname
	I0729 17:50:26.892493  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:50:26.892951  105708 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:50:26.892980  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:50:26.893125  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHPort
	I0729 17:50:26.893315  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:50:26.893443  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHUsername
	I0729 17:50:26.893573  105708 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405/id_rsa Username:docker}
	I0729 17:50:26.965153  105708 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0729 17:50:26.969848  105708 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0729 17:50:26.982564  105708 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0729 17:50:26.986579  105708 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0729 17:50:26.996948  105708 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0729 17:50:27.001030  105708 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0729 17:50:27.010925  105708 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0729 17:50:27.014937  105708 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0729 17:50:27.029179  105708 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0729 17:50:27.034973  105708 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0729 17:50:27.046033  105708 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0729 17:50:27.050116  105708 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0729 17:50:27.060140  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 17:50:27.086899  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 17:50:27.111873  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 17:50:27.136912  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 17:50:27.161297  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0729 17:50:27.183852  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 17:50:27.205820  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 17:50:27.227387  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 17:50:27.252648  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 17:50:27.276299  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282.pem --> /usr/share/ca-certificates/95282.pem (1338 bytes)
	I0729 17:50:27.298286  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem --> /usr/share/ca-certificates/952822.pem (1708 bytes)
	I0729 17:50:27.320740  105708 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0729 17:50:27.336410  105708 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0729 17:50:27.351811  105708 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0729 17:50:27.367302  105708 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0729 17:50:27.382567  105708 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0729 17:50:27.398845  105708 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0729 17:50:27.414879  105708 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0729 17:50:27.431677  105708 ssh_runner.go:195] Run: openssl version
	I0729 17:50:27.437000  105708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 17:50:27.447184  105708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:50:27.451433  105708 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:50:27.451488  105708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:50:27.457153  105708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 17:50:27.467476  105708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/95282.pem && ln -fs /usr/share/ca-certificates/95282.pem /etc/ssl/certs/95282.pem"
	I0729 17:50:27.477555  105708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/95282.pem
	I0729 17:50:27.481669  105708 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:45 /usr/share/ca-certificates/95282.pem
	I0729 17:50:27.481714  105708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/95282.pem
	I0729 17:50:27.487385  105708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/95282.pem /etc/ssl/certs/51391683.0"
	I0729 17:50:27.498987  105708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/952822.pem && ln -fs /usr/share/ca-certificates/952822.pem /etc/ssl/certs/952822.pem"
	I0729 17:50:27.510739  105708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/952822.pem
	I0729 17:50:27.515250  105708 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:45 /usr/share/ca-certificates/952822.pem
	I0729 17:50:27.515318  105708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/952822.pem
	I0729 17:50:27.520988  105708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/952822.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 17:50:27.532716  105708 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 17:50:27.536746  105708 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 17:50:27.536801  105708 kubeadm.go:934] updating node {m02 192.168.39.62 8443 v1.30.3 crio true true} ...
	I0729 17:50:27.536934  105708 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-794405-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.62
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-794405 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
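
	The kubelet unit override above is presumably what is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down (the 312-byte scp). A sketch of rendering it with text/template; the template text mirrors the logged unit, while the Version/Node/IP field names are illustrative placeholders, not minikube's variables:

	package main

	import (
		"os"
		"text/template"
	)

	const unit = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Node}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}

	[Install]
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(unit))
		err := t.Execute(os.Stdout, struct{ Version, Node, IP string }{
			Version: "v1.30.3", Node: "ha-794405-m02", IP: "192.168.39.62",
		})
		if err != nil {
			panic(err)
		}
	}
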
	I0729 17:50:27.536968  105708 kube-vip.go:115] generating kube-vip config ...
	I0729 17:50:27.537001  105708 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 17:50:27.554348  105708 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 17:50:27.554410  105708 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
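
	The generated static pod advertises the control-plane VIP 192.168.39.254 over ARP on eth0 and, with cp_enable and lb_enable set, load-balances API traffic on port 8443 across control-plane members. A trivial, illustrative check that the VIP sits in the same /24 the DHCP leases above come from:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// Prefix:24 comes from the DHCP lease entries above; 192.168.39.254 is the
		// kube-vip "address" env value.
		_, subnet, err := net.ParseCIDR("192.168.39.0/24")
		if err != nil {
			panic(err)
		}
		vip := net.ParseIP("192.168.39.254")
		fmt.Printf("VIP %s inside %s: %v\n", vip, subnet, subnet.Contains(vip))
	}
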
	I0729 17:50:27.554455  105708 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 17:50:27.565114  105708 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0729 17:50:27.565165  105708 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0729 17:50:27.575660  105708 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0729 17:50:27.575672  105708 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19339-88081/.minikube/cache/linux/amd64/v1.30.3/kubelet
	I0729 17:50:27.575686  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 17:50:27.575694  105708 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19339-88081/.minikube/cache/linux/amd64/v1.30.3/kubeadm
	I0729 17:50:27.575760  105708 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 17:50:27.580013  105708 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0729 17:50:27.580046  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0729 17:50:28.382415  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 17:50:28.382508  105708 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 17:50:28.388642  105708 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0729 17:50:28.388679  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0729 17:50:28.499340  105708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:50:28.536388  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 17:50:28.536512  105708 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 17:50:28.550613  105708 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0729 17:50:28.550655  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
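
	The checksum= query in the download URLs above means each binary is verified against the .sha256 file published next to it before being cached under .minikube/cache and copied to /var/lib/minikube/binaries. A standalone sketch of that download-and-verify step (not minikube's downloader; kubeadm is used as the example and errors simply panic):

	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
		"strings"
	)

	func fetch(url string) ([]byte, error) {
		resp, err := http.Get(url)
		if err != nil {
			return nil, err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
		}
		return io.ReadAll(resp.Body)
	}

	func main() {
		const base = "https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm"
		bin, err := fetch(base)
		if err != nil {
			panic(err)
		}
		sum, err := fetch(base + ".sha256")
		if err != nil {
			panic(err)
		}
		want := strings.Fields(string(sum))[0] // the .sha256 file holds the hex digest
		got := sha256.Sum256(bin)
		if hex.EncodeToString(got[:]) != want {
			panic("checksum mismatch for " + base)
		}
		if err := os.WriteFile("kubeadm", bin, 0o755); err != nil {
			panic(err)
		}
		fmt.Println("kubeadm verified and written")
	}
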
	I0729 17:50:29.021828  105708 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0729 17:50:29.031476  105708 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0729 17:50:29.048427  105708 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 17:50:29.063977  105708 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0729 17:50:29.080284  105708 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 17:50:29.084172  105708 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 17:50:29.095268  105708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 17:50:29.215227  105708 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 17:50:29.232465  105708 host.go:66] Checking if "ha-794405" exists ...
	I0729 17:50:29.233009  105708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:50:29.233069  105708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:50:29.248039  105708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45755
	I0729 17:50:29.248592  105708 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:50:29.249043  105708 main.go:141] libmachine: Using API Version  1
	I0729 17:50:29.249066  105708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:50:29.249395  105708 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:50:29.249590  105708 main.go:141] libmachine: (ha-794405) Calling .DriverName
	I0729 17:50:29.249744  105708 start.go:317] joinCluster: &{Name:ha-794405 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-794405 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.62 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:50:29.249846  105708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0729 17:50:29.249870  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHHostname
	I0729 17:50:29.252743  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:50:29.253163  105708 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:50:29.253193  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:50:29.253322  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHPort
	I0729 17:50:29.253494  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:50:29.253657  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHUsername
	I0729 17:50:29.253799  105708 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405/id_rsa Username:docker}
	I0729 17:50:29.421845  105708 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.62 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 17:50:29.421894  105708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ur0h6k.06ti7dkwwdnzm66h --discovery-token-ca-cert-hash sha256:13f0bc092dc09b7291dec830fc09c94db5ac1707dd26df6091df80009af3af7f --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-794405-m02 --control-plane --apiserver-advertise-address=192.168.39.62 --apiserver-bind-port=8443"
	I0729 17:50:52.245826  105708 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ur0h6k.06ti7dkwwdnzm66h --discovery-token-ca-cert-hash sha256:13f0bc092dc09b7291dec830fc09c94db5ac1707dd26df6091df80009af3af7f --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-794405-m02 --control-plane --apiserver-advertise-address=192.168.39.62 --apiserver-bind-port=8443": (22.823897398s)
	I0729 17:50:52.245871  105708 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0729 17:50:52.752541  105708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-794405-m02 minikube.k8s.io/updated_at=2024_07_29T17_50_52_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=de53afae5e8f4269438099f4dad14d93a8a17e35 minikube.k8s.io/name=ha-794405 minikube.k8s.io/primary=false
	I0729 17:50:52.913172  105708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-794405-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0729 17:50:53.028489  105708 start.go:319] duration metric: took 23.778741939s to joinCluster
	I0729 17:50:53.028577  105708 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.62 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 17:50:53.028882  105708 config.go:182] Loaded profile config "ha-794405": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:50:53.032153  105708 out.go:177] * Verifying Kubernetes components...
	I0729 17:50:53.033313  105708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 17:50:53.303312  105708 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 17:50:53.357367  105708 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19339-88081/kubeconfig
	I0729 17:50:53.357719  105708 kapi.go:59] client config for ha-794405: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/client.crt", KeyFile:"/home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/client.key", CAFile:"/home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0729 17:50:53.357803  105708 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.102:8443
	I0729 17:50:53.358125  105708 node_ready.go:35] waiting up to 6m0s for node "ha-794405-m02" to be "Ready" ...
	I0729 17:50:53.358245  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:50:53.358256  105708 round_trippers.go:469] Request Headers:
	I0729 17:50:53.358267  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:50:53.358276  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:50:53.372063  105708 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0729 17:50:53.859331  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:50:53.859352  105708 round_trippers.go:469] Request Headers:
	I0729 17:50:53.859360  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:50:53.859365  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:50:53.867596  105708 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0729 17:50:54.358917  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:50:54.358941  105708 round_trippers.go:469] Request Headers:
	I0729 17:50:54.358950  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:50:54.358952  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:50:54.363593  105708 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 17:50:54.859319  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:50:54.859341  105708 round_trippers.go:469] Request Headers:
	I0729 17:50:54.859348  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:50:54.859352  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:50:54.864698  105708 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 17:50:55.359044  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:50:55.359071  105708 round_trippers.go:469] Request Headers:
	I0729 17:50:55.359084  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:50:55.359090  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:50:55.364653  105708 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 17:50:55.365354  105708 node_ready.go:53] node "ha-794405-m02" has status "Ready":"False"
	I0729 17:50:55.859282  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:50:55.859302  105708 round_trippers.go:469] Request Headers:
	I0729 17:50:55.859311  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:50:55.859315  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:50:55.863618  105708 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 17:50:56.358964  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:50:56.358994  105708 round_trippers.go:469] Request Headers:
	I0729 17:50:56.359005  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:50:56.359012  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:50:56.362213  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:50:56.858760  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:50:56.858779  105708 round_trippers.go:469] Request Headers:
	I0729 17:50:56.858787  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:50:56.858791  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:50:56.861362  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:50:57.358925  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:50:57.358946  105708 round_trippers.go:469] Request Headers:
	I0729 17:50:57.358955  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:50:57.358959  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:50:57.361698  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:50:57.858500  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:50:57.858522  105708 round_trippers.go:469] Request Headers:
	I0729 17:50:57.858530  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:50:57.858538  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:50:57.863169  105708 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 17:50:57.863967  105708 node_ready.go:53] node "ha-794405-m02" has status "Ready":"False"
	I0729 17:50:58.358645  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:50:58.358667  105708 round_trippers.go:469] Request Headers:
	I0729 17:50:58.358675  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:50:58.358679  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:50:58.361278  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:50:58.859319  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:50:58.859341  105708 round_trippers.go:469] Request Headers:
	I0729 17:50:58.859349  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:50:58.859354  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:50:58.862367  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:50:59.358923  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:50:59.358952  105708 round_trippers.go:469] Request Headers:
	I0729 17:50:59.358964  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:50:59.358970  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:50:59.368443  105708 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0729 17:50:59.858445  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:50:59.858475  105708 round_trippers.go:469] Request Headers:
	I0729 17:50:59.858487  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:50:59.858493  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:50:59.861504  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:51:00.358665  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:51:00.358687  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:00.358695  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:00.358698  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:00.361342  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:51:00.361796  105708 node_ready.go:53] node "ha-794405-m02" has status "Ready":"False"
	I0729 17:51:00.859287  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:51:00.859310  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:00.859317  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:00.859320  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:00.862958  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:51:01.358364  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:51:01.358386  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:01.358394  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:01.358399  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:01.361432  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:51:01.859047  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:51:01.859074  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:01.859086  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:01.859094  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:01.862035  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:51:02.358546  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:51:02.358569  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:02.358577  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:02.358581  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:02.361609  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:51:02.362133  105708 node_ready.go:53] node "ha-794405-m02" has status "Ready":"False"
	I0729 17:51:02.858906  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:51:02.858927  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:02.858940  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:02.858944  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:02.862407  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:51:03.359138  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:51:03.359164  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:03.359173  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:03.359178  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:03.361986  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:51:03.859104  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:51:03.859127  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:03.859136  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:03.859139  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:03.862426  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:51:04.359063  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:51:04.359087  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:04.359095  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:04.359099  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:04.362334  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:51:04.362999  105708 node_ready.go:53] node "ha-794405-m02" has status "Ready":"False"
	I0729 17:51:04.859366  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:51:04.859388  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:04.859397  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:04.859400  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:04.862279  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:51:05.358324  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:51:05.358345  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:05.358353  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:05.358358  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:05.361284  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:51:05.859219  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:51:05.859242  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:05.859250  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:05.859254  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:05.861973  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:51:06.359344  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:51:06.359367  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:06.359375  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:06.359378  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:06.362493  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:51:06.363234  105708 node_ready.go:53] node "ha-794405-m02" has status "Ready":"False"
	I0729 17:51:06.858498  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:51:06.858521  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:06.858530  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:06.858534  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:06.861598  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:51:07.358370  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:51:07.358396  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:07.358409  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:07.358414  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:07.361864  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:51:07.362602  105708 node_ready.go:49] node "ha-794405-m02" has status "Ready":"True"
	I0729 17:51:07.362623  105708 node_ready.go:38] duration metric: took 14.004476488s for node "ha-794405-m02" to be "Ready" ...
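The repeated GETs of /api/v1/nodes/ha-794405-m02 above are the roughly half-second readiness poll performed by node_ready.go. A minimal client-go sketch of the same wait; the kubeconfig path, node name, and intervals are assumptions for illustration, not minikube's actual implementation:

```go
// Minimal sketch of a node-readiness poll against the Kubernetes API.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 500ms (matching the cadence visible in the log) for up to 6 minutes.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := client.CoreV1().Nodes().Get(ctx, "ha-794405-m02", metav1.GetOptions{})
			if err != nil {
				return false, nil // treat errors as "not ready yet" and keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	fmt.Println("node Ready:", err == nil)
}
```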
	I0729 17:51:07.362631  105708 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 17:51:07.362705  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I0729 17:51:07.362718  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:07.362728  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:07.362737  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:07.368261  105708 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 17:51:07.375052  105708 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-bb2jg" in "kube-system" namespace to be "Ready" ...
	I0729 17:51:07.375139  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-bb2jg
	I0729 17:51:07.375151  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:07.375162  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:07.375168  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:07.378064  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:51:07.378817  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405
	I0729 17:51:07.378839  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:07.378849  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:07.378858  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:07.381407  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:51:07.383608  105708 pod_ready.go:92] pod "coredns-7db6d8ff4d-bb2jg" in "kube-system" namespace has status "Ready":"True"
	I0729 17:51:07.383625  105708 pod_ready.go:81] duration metric: took 8.550201ms for pod "coredns-7db6d8ff4d-bb2jg" in "kube-system" namespace to be "Ready" ...
	I0729 17:51:07.383634  105708 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-nzvff" in "kube-system" namespace to be "Ready" ...
	I0729 17:51:07.383683  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nzvff
	I0729 17:51:07.383690  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:07.383696  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:07.383704  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:07.388595  105708 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 17:51:07.389483  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405
	I0729 17:51:07.389498  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:07.389507  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:07.389511  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:07.391907  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:51:07.392441  105708 pod_ready.go:92] pod "coredns-7db6d8ff4d-nzvff" in "kube-system" namespace has status "Ready":"True"
	I0729 17:51:07.392456  105708 pod_ready.go:81] duration metric: took 8.810378ms for pod "coredns-7db6d8ff4d-nzvff" in "kube-system" namespace to be "Ready" ...
	I0729 17:51:07.392466  105708 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-794405" in "kube-system" namespace to be "Ready" ...
	I0729 17:51:07.392507  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-794405
	I0729 17:51:07.392515  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:07.392521  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:07.392525  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:07.394731  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:51:07.395273  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405
	I0729 17:51:07.395295  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:07.395308  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:07.395314  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:07.397197  105708 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0729 17:51:07.397787  105708 pod_ready.go:92] pod "etcd-ha-794405" in "kube-system" namespace has status "Ready":"True"
	I0729 17:51:07.397814  105708 pod_ready.go:81] duration metric: took 5.34175ms for pod "etcd-ha-794405" in "kube-system" namespace to be "Ready" ...
	I0729 17:51:07.397826  105708 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-794405-m02" in "kube-system" namespace to be "Ready" ...
	I0729 17:51:07.397886  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-794405-m02
	I0729 17:51:07.397896  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:07.397905  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:07.397913  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:07.400537  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:51:07.401088  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:51:07.401101  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:07.401108  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:07.401115  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:07.402916  105708 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0729 17:51:07.899036  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-794405-m02
	I0729 17:51:07.899066  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:07.899078  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:07.899084  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:07.909049  105708 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0729 17:51:07.909963  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:51:07.909980  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:07.909988  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:07.909992  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:07.912758  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:51:08.398045  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-794405-m02
	I0729 17:51:08.398072  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:08.398079  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:08.398085  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:08.401591  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:51:08.402289  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:51:08.402305  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:08.402312  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:08.402316  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:08.405255  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:51:08.898105  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-794405-m02
	I0729 17:51:08.898124  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:08.898133  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:08.898137  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:08.901450  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:51:08.902305  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:51:08.902322  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:08.902332  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:08.902338  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:08.904769  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:51:09.398045  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-794405-m02
	I0729 17:51:09.398067  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:09.398075  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:09.398080  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:09.401424  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:51:09.402429  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:51:09.402446  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:09.402452  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:09.402456  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:09.405058  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:51:09.405629  105708 pod_ready.go:102] pod "etcd-ha-794405-m02" in "kube-system" namespace has status "Ready":"False"
	I0729 17:51:09.898067  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-794405-m02
	I0729 17:51:09.898091  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:09.898099  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:09.898103  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:09.901384  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:51:09.902045  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:51:09.902061  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:09.902068  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:09.902073  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:09.904490  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:51:10.398417  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-794405-m02
	I0729 17:51:10.398442  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:10.398450  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:10.398455  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:10.401415  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:51:10.401997  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:51:10.402012  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:10.402019  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:10.402022  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:10.404835  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:51:10.405333  105708 pod_ready.go:92] pod "etcd-ha-794405-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 17:51:10.405350  105708 pod_ready.go:81] duration metric: took 3.007512748s for pod "etcd-ha-794405-m02" in "kube-system" namespace to be "Ready" ...
	I0729 17:51:10.405367  105708 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-794405" in "kube-system" namespace to be "Ready" ...
	I0729 17:51:10.405423  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-794405
	I0729 17:51:10.405432  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:10.405442  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:10.405447  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:10.407827  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:51:10.408297  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405
	I0729 17:51:10.408311  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:10.408320  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:10.408325  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:10.410380  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:51:10.410835  105708 pod_ready.go:92] pod "kube-apiserver-ha-794405" in "kube-system" namespace has status "Ready":"True"
	I0729 17:51:10.410849  105708 pod_ready.go:81] duration metric: took 5.474903ms for pod "kube-apiserver-ha-794405" in "kube-system" namespace to be "Ready" ...
	I0729 17:51:10.410857  105708 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-794405-m02" in "kube-system" namespace to be "Ready" ...
	I0729 17:51:10.410904  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-794405-m02
	I0729 17:51:10.410912  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:10.410918  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:10.410921  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:10.413082  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:51:10.559008  105708 request.go:629] Waited for 145.311469ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:51:10.559063  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:51:10.559068  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:10.559075  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:10.559078  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:10.561945  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:51:10.562429  105708 pod_ready.go:92] pod "kube-apiserver-ha-794405-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 17:51:10.562446  105708 pod_ready.go:81] duration metric: took 151.584306ms for pod "kube-apiserver-ha-794405-m02" in "kube-system" namespace to be "Ready" ...
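The "Waited for ... due to client-side throttling, not priority and fairness" entries come from client-go's client-side rate limiter, which falls back to 5 QPS with a burst of 10 when QPS and Burst are left unset on the rest.Config (as the QPS:0, Burst:0 dump earlier shows). A hedged sketch of how those limits could be raised; the values are illustrative, not what minikube configures:

```go
// Sketch only: loosening client-go's client-side rate limiter.
package client

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func newFasterClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	// With QPS/Burst left at zero, client-go uses its defaults (5 QPS, burst 10),
	// which is what produces the throttling waits seen in the log.
	cfg.QPS = 50   // assumed value for illustration
	cfg.Burst = 100 // assumed value for illustration
	return kubernetes.NewForConfig(cfg)
}
```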
	I0729 17:51:10.562456  105708 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-794405" in "kube-system" namespace to be "Ready" ...
	I0729 17:51:10.758890  105708 request.go:629] Waited for 196.352271ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-794405
	I0729 17:51:10.758959  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-794405
	I0729 17:51:10.758966  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:10.758977  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:10.758984  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:10.762199  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:51:10.959007  105708 request.go:629] Waited for 196.057263ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-794405
	I0729 17:51:10.959072  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405
	I0729 17:51:10.959080  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:10.959089  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:10.959096  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:10.962208  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:51:10.962916  105708 pod_ready.go:92] pod "kube-controller-manager-ha-794405" in "kube-system" namespace has status "Ready":"True"
	I0729 17:51:10.962937  105708 pod_ready.go:81] duration metric: took 400.475478ms for pod "kube-controller-manager-ha-794405" in "kube-system" namespace to be "Ready" ...
	I0729 17:51:10.962948  105708 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-794405-m02" in "kube-system" namespace to be "Ready" ...
	I0729 17:51:11.159321  105708 request.go:629] Waited for 196.305681ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-794405-m02
	I0729 17:51:11.159396  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-794405-m02
	I0729 17:51:11.159401  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:11.159409  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:11.159414  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:11.162769  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:51:11.358949  105708 request.go:629] Waited for 195.417223ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:51:11.359029  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:51:11.359034  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:11.359041  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:11.359046  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:11.362003  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:51:11.362642  105708 pod_ready.go:92] pod "kube-controller-manager-ha-794405-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 17:51:11.362661  105708 pod_ready.go:81] duration metric: took 399.706913ms for pod "kube-controller-manager-ha-794405-m02" in "kube-system" namespace to be "Ready" ...
	I0729 17:51:11.362676  105708 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-llkz8" in "kube-system" namespace to be "Ready" ...
	I0729 17:51:11.558798  105708 request.go:629] Waited for 196.045783ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-llkz8
	I0729 17:51:11.558883  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-llkz8
	I0729 17:51:11.558890  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:11.558901  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:11.558910  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:11.562626  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:51:11.758813  105708 request.go:629] Waited for 195.361111ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-794405
	I0729 17:51:11.758876  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405
	I0729 17:51:11.758881  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:11.758889  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:11.758895  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:11.761854  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:51:11.762584  105708 pod_ready.go:92] pod "kube-proxy-llkz8" in "kube-system" namespace has status "Ready":"True"
	I0729 17:51:11.762611  105708 pod_ready.go:81] duration metric: took 399.920602ms for pod "kube-proxy-llkz8" in "kube-system" namespace to be "Ready" ...
	I0729 17:51:11.762620  105708 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qcmxl" in "kube-system" namespace to be "Ready" ...
	I0729 17:51:11.958999  105708 request.go:629] Waited for 196.309399ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qcmxl
	I0729 17:51:11.959070  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qcmxl
	I0729 17:51:11.959080  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:11.959091  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:11.959101  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:11.962553  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:51:12.158622  105708 request.go:629] Waited for 195.277383ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:51:12.158686  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:51:12.158692  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:12.158701  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:12.158706  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:12.161758  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:51:12.162313  105708 pod_ready.go:92] pod "kube-proxy-qcmxl" in "kube-system" namespace has status "Ready":"True"
	I0729 17:51:12.162331  105708 pod_ready.go:81] duration metric: took 399.705375ms for pod "kube-proxy-qcmxl" in "kube-system" namespace to be "Ready" ...
	I0729 17:51:12.162343  105708 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-794405" in "kube-system" namespace to be "Ready" ...
	I0729 17:51:12.358408  105708 request.go:629] Waited for 195.986243ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-794405
	I0729 17:51:12.358505  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-794405
	I0729 17:51:12.358518  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:12.358528  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:12.358533  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:12.361719  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:51:12.558594  105708 request.go:629] Waited for 196.298605ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-794405
	I0729 17:51:12.558662  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405
	I0729 17:51:12.558668  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:12.558675  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:12.558679  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:12.561636  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:51:12.562230  105708 pod_ready.go:92] pod "kube-scheduler-ha-794405" in "kube-system" namespace has status "Ready":"True"
	I0729 17:51:12.562250  105708 pod_ready.go:81] duration metric: took 399.901327ms for pod "kube-scheduler-ha-794405" in "kube-system" namespace to be "Ready" ...
	I0729 17:51:12.562260  105708 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-794405-m02" in "kube-system" namespace to be "Ready" ...
	I0729 17:51:12.759320  105708 request.go:629] Waited for 196.976772ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-794405-m02
	I0729 17:51:12.759381  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-794405-m02
	I0729 17:51:12.759386  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:12.759393  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:12.759397  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:12.762572  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:51:12.959118  105708 request.go:629] Waited for 195.846133ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:51:12.959175  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:51:12.959179  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:12.959186  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:12.959191  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:12.962116  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:51:12.962744  105708 pod_ready.go:92] pod "kube-scheduler-ha-794405-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 17:51:12.962764  105708 pod_ready.go:81] duration metric: took 400.498045ms for pod "kube-scheduler-ha-794405-m02" in "kube-system" namespace to be "Ready" ...
	I0729 17:51:12.962774  105708 pod_ready.go:38] duration metric: took 5.600132075s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 17:51:12.962790  105708 api_server.go:52] waiting for apiserver process to appear ...
	I0729 17:51:12.962842  105708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 17:51:12.978299  105708 api_server.go:72] duration metric: took 19.949674148s to wait for apiserver process to appear ...
	I0729 17:51:12.978317  105708 api_server.go:88] waiting for apiserver healthz status ...
	I0729 17:51:12.978338  105708 api_server.go:253] Checking apiserver healthz at https://192.168.39.102:8443/healthz ...
	I0729 17:51:12.982647  105708 api_server.go:279] https://192.168.39.102:8443/healthz returned 200:
	ok
	I0729 17:51:12.982708  105708 round_trippers.go:463] GET https://192.168.39.102:8443/version
	I0729 17:51:12.982715  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:12.982723  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:12.982728  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:12.983642  105708 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0729 17:51:12.983761  105708 api_server.go:141] control plane version: v1.30.3
	I0729 17:51:12.983784  105708 api_server.go:131] duration metric: took 5.459255ms to wait for apiserver health ...
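The healthz wait above boils down to an HTTPS GET of /healthz that counts as healthy on a 200 response with body "ok", followed by a /version request to record the control-plane version. A sketch of that probe; TLS verification is skipped purely for brevity, whereas minikube itself verifies against the cluster CA:

```go
// Hedged sketch of an apiserver health probe: GET <endpoint>/healthz and expect 200 "ok".
package health

import (
	"crypto/tls"
	"io"
	"net/http"
	"strings"
	"time"
)

func apiserverHealthy(endpoint string) bool {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
		},
	}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok"
}
```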
	I0729 17:51:12.983794  105708 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 17:51:13.159231  105708 request.go:629] Waited for 175.337331ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I0729 17:51:13.159291  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I0729 17:51:13.159295  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:13.159303  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:13.159310  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:13.164029  105708 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 17:51:13.168981  105708 system_pods.go:59] 17 kube-system pods found
	I0729 17:51:13.169011  105708 system_pods.go:61] "coredns-7db6d8ff4d-bb2jg" [ee9ad335-25b2-4e6c-a523-47b06ce713dc] Running
	I0729 17:51:13.169016  105708 system_pods.go:61] "coredns-7db6d8ff4d-nzvff" [b1e2c116-2549-4e1a-8d79-cd86595db9f3] Running
	I0729 17:51:13.169019  105708 system_pods.go:61] "etcd-ha-794405" [95b13c7f-e6bf-4225-a00b-de1fefb711ac] Running
	I0729 17:51:13.169023  105708 system_pods.go:61] "etcd-ha-794405-m02" [5c99a8df-66da-45b0-b62a-30e17113a35d] Running
	I0729 17:51:13.169027  105708 system_pods.go:61] "kindnet-8qgq5" [0b2b707a-4283-41cd-9f3b-83b6f2d169cf] Running
	I0729 17:51:13.169031  105708 system_pods.go:61] "kindnet-j4l89" [c0b81d74-531b-4878-84ea-654e7b57f0ba] Running
	I0729 17:51:13.169036  105708 system_pods.go:61] "kube-apiserver-ha-794405" [32e07436-25ae-4f51-b0e6-004ed954d8b7] Running
	I0729 17:51:13.169041  105708 system_pods.go:61] "kube-apiserver-ha-794405-m02" [0e7afc5b-62cf-4874-ade9-1df7a6c3926d] Running
	I0729 17:51:13.169046  105708 system_pods.go:61] "kube-controller-manager-ha-794405" [c2d30a4e-d2bc-4221-b8d9-98b67932e4ab] Running
	I0729 17:51:13.169051  105708 system_pods.go:61] "kube-controller-manager-ha-794405-m02" [88e040cd-4d7f-4a2e-97cb-4f24249d1c82] Running
	I0729 17:51:13.169058  105708 system_pods.go:61] "kube-proxy-llkz8" [95536eff-3f12-4a7e-9504-c8f6b1acc4cb] Running
	I0729 17:51:13.169062  105708 system_pods.go:61] "kube-proxy-qcmxl" [963fc8f8-3080-4602-9437-d2060c7ea622] Running
	I0729 17:51:13.169068  105708 system_pods.go:61] "kube-scheduler-ha-794405" [b41eb8ec-bf30-4cc6-b454-28305ccf70b5] Running
	I0729 17:51:13.169073  105708 system_pods.go:61] "kube-scheduler-ha-794405-m02" [8a0bfe0d-b80a-4799-8371-84041938bf1d] Running
	I0729 17:51:13.169081  105708 system_pods.go:61] "kube-vip-ha-794405" [0e782ab8-0d52-4894-b003-493294ab4710] Running
	I0729 17:51:13.169086  105708 system_pods.go:61] "kube-vip-ha-794405-m02" [f7e2f40a-28ab-4655-a82c-11df8cf806d5] Running
	I0729 17:51:13.169092  105708 system_pods.go:61] "storage-provisioner" [0e08d093-f8b5-4614-9be2-5832f7cafa75] Running
	I0729 17:51:13.169098  105708 system_pods.go:74] duration metric: took 185.297964ms to wait for pod list to return data ...
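The kube-system survey above is a single pod list printed one pod per line. A client-go sketch of the same listing; the clientset would be built as in the earlier sketches, and the helper name is hypothetical:

```go
// Sketch of listing kube-system pods and printing name, UID, and phase.
package pods

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func listKubeSystemPods(client kubernetes.Interface) error {
	pods, err := client.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		// Mirrors the `"name" [uid] Running` lines emitted by system_pods.go.
		fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
	}
	return nil
}
```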
	I0729 17:51:13.169108  105708 default_sa.go:34] waiting for default service account to be created ...
	I0729 17:51:13.358462  105708 request.go:629] Waited for 189.275415ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/default/serviceaccounts
	I0729 17:51:13.358534  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/default/serviceaccounts
	I0729 17:51:13.358547  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:13.358557  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:13.358568  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:13.361778  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:51:13.362015  105708 default_sa.go:45] found service account: "default"
	I0729 17:51:13.362033  105708 default_sa.go:55] duration metric: took 192.917988ms for default service account to be created ...
	I0729 17:51:13.362042  105708 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 17:51:13.559189  105708 request.go:629] Waited for 197.080882ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I0729 17:51:13.559261  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I0729 17:51:13.559268  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:13.559278  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:13.559288  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:13.564241  105708 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 17:51:13.568460  105708 system_pods.go:86] 17 kube-system pods found
	I0729 17:51:13.568486  105708 system_pods.go:89] "coredns-7db6d8ff4d-bb2jg" [ee9ad335-25b2-4e6c-a523-47b06ce713dc] Running
	I0729 17:51:13.568491  105708 system_pods.go:89] "coredns-7db6d8ff4d-nzvff" [b1e2c116-2549-4e1a-8d79-cd86595db9f3] Running
	I0729 17:51:13.568495  105708 system_pods.go:89] "etcd-ha-794405" [95b13c7f-e6bf-4225-a00b-de1fefb711ac] Running
	I0729 17:51:13.568499  105708 system_pods.go:89] "etcd-ha-794405-m02" [5c99a8df-66da-45b0-b62a-30e17113a35d] Running
	I0729 17:51:13.568503  105708 system_pods.go:89] "kindnet-8qgq5" [0b2b707a-4283-41cd-9f3b-83b6f2d169cf] Running
	I0729 17:51:13.568507  105708 system_pods.go:89] "kindnet-j4l89" [c0b81d74-531b-4878-84ea-654e7b57f0ba] Running
	I0729 17:51:13.568511  105708 system_pods.go:89] "kube-apiserver-ha-794405" [32e07436-25ae-4f51-b0e6-004ed954d8b7] Running
	I0729 17:51:13.568515  105708 system_pods.go:89] "kube-apiserver-ha-794405-m02" [0e7afc5b-62cf-4874-ade9-1df7a6c3926d] Running
	I0729 17:51:13.568519  105708 system_pods.go:89] "kube-controller-manager-ha-794405" [c2d30a4e-d2bc-4221-b8d9-98b67932e4ab] Running
	I0729 17:51:13.568523  105708 system_pods.go:89] "kube-controller-manager-ha-794405-m02" [88e040cd-4d7f-4a2e-97cb-4f24249d1c82] Running
	I0729 17:51:13.568527  105708 system_pods.go:89] "kube-proxy-llkz8" [95536eff-3f12-4a7e-9504-c8f6b1acc4cb] Running
	I0729 17:51:13.568531  105708 system_pods.go:89] "kube-proxy-qcmxl" [963fc8f8-3080-4602-9437-d2060c7ea622] Running
	I0729 17:51:13.568534  105708 system_pods.go:89] "kube-scheduler-ha-794405" [b41eb8ec-bf30-4cc6-b454-28305ccf70b5] Running
	I0729 17:51:13.568538  105708 system_pods.go:89] "kube-scheduler-ha-794405-m02" [8a0bfe0d-b80a-4799-8371-84041938bf1d] Running
	I0729 17:51:13.568544  105708 system_pods.go:89] "kube-vip-ha-794405" [0e782ab8-0d52-4894-b003-493294ab4710] Running
	I0729 17:51:13.568550  105708 system_pods.go:89] "kube-vip-ha-794405-m02" [f7e2f40a-28ab-4655-a82c-11df8cf806d5] Running
	I0729 17:51:13.568555  105708 system_pods.go:89] "storage-provisioner" [0e08d093-f8b5-4614-9be2-5832f7cafa75] Running
	I0729 17:51:13.568561  105708 system_pods.go:126] duration metric: took 206.513897ms to wait for k8s-apps to be running ...
	I0729 17:51:13.568570  105708 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 17:51:13.568616  105708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:51:13.584105  105708 system_svc.go:56] duration metric: took 15.522568ms WaitForService to wait for kubelet
	I0729 17:51:13.584136  105708 kubeadm.go:582] duration metric: took 20.555513243s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 17:51:13.584155  105708 node_conditions.go:102] verifying NodePressure condition ...
	I0729 17:51:13.758502  105708 request.go:629] Waited for 174.254052ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes
	I0729 17:51:13.758577  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes
	I0729 17:51:13.758584  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:13.758592  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:13.758599  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:13.763156  105708 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 17:51:13.764285  105708 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 17:51:13.764311  105708 node_conditions.go:123] node cpu capacity is 2
	I0729 17:51:13.764322  105708 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 17:51:13.764326  105708 node_conditions.go:123] node cpu capacity is 2
	I0729 17:51:13.764331  105708 node_conditions.go:105] duration metric: took 180.172008ms to run NodePressure ...
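
Editor's note: the system_pods/node_conditions checks logged above amount to listing kube-system pods and reading node capacity through the API server. Below is a minimal, illustrative client-go sketch of that check, not minikube's actual helpers; the kubeconfig path and program structure are assumptions.

// Sketch: list kube-system pods and read node CPU/ephemeral-storage capacity,
// mirroring the system_pods and node_conditions checks in the log above.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // hypothetical kubeconfig path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s: %s\n", p.Name, p.Status.Phase) // expected to be Running, as logged
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
}
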
	I0729 17:51:13.764342  105708 start.go:241] waiting for startup goroutines ...
	I0729 17:51:13.764365  105708 start.go:255] writing updated cluster config ...
	I0729 17:51:13.766333  105708 out.go:177] 
	I0729 17:51:13.767774  105708 config.go:182] Loaded profile config "ha-794405": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:51:13.767861  105708 profile.go:143] Saving config to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/config.json ...
	I0729 17:51:13.769575  105708 out.go:177] * Starting "ha-794405-m03" control-plane node in "ha-794405" cluster
	I0729 17:51:13.770820  105708 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 17:51:13.770842  105708 cache.go:56] Caching tarball of preloaded images
	I0729 17:51:13.770959  105708 preload.go:172] Found /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 17:51:13.770974  105708 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 17:51:13.771093  105708 profile.go:143] Saving config to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/config.json ...
	I0729 17:51:13.771292  105708 start.go:360] acquireMachinesLock for ha-794405-m03: {Name:mkb4fc91615189a18c2505c715f6575ee0da2912 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:51:13.771340  105708 start.go:364] duration metric: took 27.932µs to acquireMachinesLock for "ha-794405-m03"
	I0729 17:51:13.771364  105708 start.go:93] Provisioning new machine with config: &{Name:ha-794405 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-794405 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.62 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 17:51:13.771491  105708 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0729 17:51:13.772994  105708 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 17:51:13.773093  105708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:51:13.773134  105708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:51:13.789231  105708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37215
	I0729 17:51:13.789690  105708 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:51:13.790213  105708 main.go:141] libmachine: Using API Version  1
	I0729 17:51:13.790238  105708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:51:13.790573  105708 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:51:13.790738  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetMachineName
	I0729 17:51:13.790879  105708 main.go:141] libmachine: (ha-794405-m03) Calling .DriverName
	I0729 17:51:13.791028  105708 start.go:159] libmachine.API.Create for "ha-794405" (driver="kvm2")
	I0729 17:51:13.791052  105708 client.go:168] LocalClient.Create starting
	I0729 17:51:13.791076  105708 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem
	I0729 17:51:13.791104  105708 main.go:141] libmachine: Decoding PEM data...
	I0729 17:51:13.791118  105708 main.go:141] libmachine: Parsing certificate...
	I0729 17:51:13.791168  105708 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem
	I0729 17:51:13.791188  105708 main.go:141] libmachine: Decoding PEM data...
	I0729 17:51:13.791198  105708 main.go:141] libmachine: Parsing certificate...
	I0729 17:51:13.791215  105708 main.go:141] libmachine: Running pre-create checks...
	I0729 17:51:13.791222  105708 main.go:141] libmachine: (ha-794405-m03) Calling .PreCreateCheck
	I0729 17:51:13.791379  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetConfigRaw
	I0729 17:51:13.791697  105708 main.go:141] libmachine: Creating machine...
	I0729 17:51:13.791709  105708 main.go:141] libmachine: (ha-794405-m03) Calling .Create
	I0729 17:51:13.791855  105708 main.go:141] libmachine: (ha-794405-m03) Creating KVM machine...
	I0729 17:51:13.793425  105708 main.go:141] libmachine: (ha-794405-m03) DBG | found existing default KVM network
	I0729 17:51:13.793547  105708 main.go:141] libmachine: (ha-794405-m03) DBG | found existing private KVM network mk-ha-794405
	I0729 17:51:13.793721  105708 main.go:141] libmachine: (ha-794405-m03) Setting up store path in /home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m03 ...
	I0729 17:51:13.793749  105708 main.go:141] libmachine: (ha-794405-m03) Building disk image from file:///home/jenkins/minikube-integration/19339-88081/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0729 17:51:13.793799  105708 main.go:141] libmachine: (ha-794405-m03) DBG | I0729 17:51:13.793686  106467 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19339-88081/.minikube
	I0729 17:51:13.793884  105708 main.go:141] libmachine: (ha-794405-m03) Downloading /home/jenkins/minikube-integration/19339-88081/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19339-88081/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0729 17:51:14.056774  105708 main.go:141] libmachine: (ha-794405-m03) DBG | I0729 17:51:14.056635  106467 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m03/id_rsa...
	I0729 17:51:14.310893  105708 main.go:141] libmachine: (ha-794405-m03) DBG | I0729 17:51:14.310745  106467 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m03/ha-794405-m03.rawdisk...
	I0729 17:51:14.310929  105708 main.go:141] libmachine: (ha-794405-m03) DBG | Writing magic tar header
	I0729 17:51:14.310951  105708 main.go:141] libmachine: (ha-794405-m03) DBG | Writing SSH key tar header
	I0729 17:51:14.310963  105708 main.go:141] libmachine: (ha-794405-m03) DBG | I0729 17:51:14.310866  106467 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m03 ...
	I0729 17:51:14.310978  105708 main.go:141] libmachine: (ha-794405-m03) Setting executable bit set on /home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m03 (perms=drwx------)
	I0729 17:51:14.310997  105708 main.go:141] libmachine: (ha-794405-m03) Setting executable bit set on /home/jenkins/minikube-integration/19339-88081/.minikube/machines (perms=drwxr-xr-x)
	I0729 17:51:14.311005  105708 main.go:141] libmachine: (ha-794405-m03) Setting executable bit set on /home/jenkins/minikube-integration/19339-88081/.minikube (perms=drwxr-xr-x)
	I0729 17:51:14.311021  105708 main.go:141] libmachine: (ha-794405-m03) Setting executable bit set on /home/jenkins/minikube-integration/19339-88081 (perms=drwxrwxr-x)
	I0729 17:51:14.311033  105708 main.go:141] libmachine: (ha-794405-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 17:51:14.311047  105708 main.go:141] libmachine: (ha-794405-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 17:51:14.311061  105708 main.go:141] libmachine: (ha-794405-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m03
	I0729 17:51:14.311078  105708 main.go:141] libmachine: (ha-794405-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19339-88081/.minikube/machines
	I0729 17:51:14.311090  105708 main.go:141] libmachine: (ha-794405-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19339-88081/.minikube
	I0729 17:51:14.311099  105708 main.go:141] libmachine: (ha-794405-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19339-88081
	I0729 17:51:14.311104  105708 main.go:141] libmachine: (ha-794405-m03) Creating domain...
	I0729 17:51:14.311137  105708 main.go:141] libmachine: (ha-794405-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 17:51:14.311162  105708 main.go:141] libmachine: (ha-794405-m03) DBG | Checking permissions on dir: /home/jenkins
	I0729 17:51:14.311175  105708 main.go:141] libmachine: (ha-794405-m03) DBG | Checking permissions on dir: /home
	I0729 17:51:14.311187  105708 main.go:141] libmachine: (ha-794405-m03) DBG | Skipping /home - not owner
	I0729 17:51:14.312094  105708 main.go:141] libmachine: (ha-794405-m03) define libvirt domain using xml: 
	I0729 17:51:14.312115  105708 main.go:141] libmachine: (ha-794405-m03) <domain type='kvm'>
	I0729 17:51:14.312125  105708 main.go:141] libmachine: (ha-794405-m03)   <name>ha-794405-m03</name>
	I0729 17:51:14.312137  105708 main.go:141] libmachine: (ha-794405-m03)   <memory unit='MiB'>2200</memory>
	I0729 17:51:14.312148  105708 main.go:141] libmachine: (ha-794405-m03)   <vcpu>2</vcpu>
	I0729 17:51:14.312155  105708 main.go:141] libmachine: (ha-794405-m03)   <features>
	I0729 17:51:14.312162  105708 main.go:141] libmachine: (ha-794405-m03)     <acpi/>
	I0729 17:51:14.312167  105708 main.go:141] libmachine: (ha-794405-m03)     <apic/>
	I0729 17:51:14.312175  105708 main.go:141] libmachine: (ha-794405-m03)     <pae/>
	I0729 17:51:14.312185  105708 main.go:141] libmachine: (ha-794405-m03)     
	I0729 17:51:14.312192  105708 main.go:141] libmachine: (ha-794405-m03)   </features>
	I0729 17:51:14.312203  105708 main.go:141] libmachine: (ha-794405-m03)   <cpu mode='host-passthrough'>
	I0729 17:51:14.312226  105708 main.go:141] libmachine: (ha-794405-m03)   
	I0729 17:51:14.312242  105708 main.go:141] libmachine: (ha-794405-m03)   </cpu>
	I0729 17:51:14.312249  105708 main.go:141] libmachine: (ha-794405-m03)   <os>
	I0729 17:51:14.312261  105708 main.go:141] libmachine: (ha-794405-m03)     <type>hvm</type>
	I0729 17:51:14.312274  105708 main.go:141] libmachine: (ha-794405-m03)     <boot dev='cdrom'/>
	I0729 17:51:14.312283  105708 main.go:141] libmachine: (ha-794405-m03)     <boot dev='hd'/>
	I0729 17:51:14.312290  105708 main.go:141] libmachine: (ha-794405-m03)     <bootmenu enable='no'/>
	I0729 17:51:14.312296  105708 main.go:141] libmachine: (ha-794405-m03)   </os>
	I0729 17:51:14.312302  105708 main.go:141] libmachine: (ha-794405-m03)   <devices>
	I0729 17:51:14.312313  105708 main.go:141] libmachine: (ha-794405-m03)     <disk type='file' device='cdrom'>
	I0729 17:51:14.312325  105708 main.go:141] libmachine: (ha-794405-m03)       <source file='/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m03/boot2docker.iso'/>
	I0729 17:51:14.312335  105708 main.go:141] libmachine: (ha-794405-m03)       <target dev='hdc' bus='scsi'/>
	I0729 17:51:14.312347  105708 main.go:141] libmachine: (ha-794405-m03)       <readonly/>
	I0729 17:51:14.312355  105708 main.go:141] libmachine: (ha-794405-m03)     </disk>
	I0729 17:51:14.312365  105708 main.go:141] libmachine: (ha-794405-m03)     <disk type='file' device='disk'>
	I0729 17:51:14.312384  105708 main.go:141] libmachine: (ha-794405-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 17:51:14.312397  105708 main.go:141] libmachine: (ha-794405-m03)       <source file='/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m03/ha-794405-m03.rawdisk'/>
	I0729 17:51:14.312404  105708 main.go:141] libmachine: (ha-794405-m03)       <target dev='hda' bus='virtio'/>
	I0729 17:51:14.312410  105708 main.go:141] libmachine: (ha-794405-m03)     </disk>
	I0729 17:51:14.312417  105708 main.go:141] libmachine: (ha-794405-m03)     <interface type='network'>
	I0729 17:51:14.312423  105708 main.go:141] libmachine: (ha-794405-m03)       <source network='mk-ha-794405'/>
	I0729 17:51:14.312429  105708 main.go:141] libmachine: (ha-794405-m03)       <model type='virtio'/>
	I0729 17:51:14.312458  105708 main.go:141] libmachine: (ha-794405-m03)     </interface>
	I0729 17:51:14.312479  105708 main.go:141] libmachine: (ha-794405-m03)     <interface type='network'>
	I0729 17:51:14.312488  105708 main.go:141] libmachine: (ha-794405-m03)       <source network='default'/>
	I0729 17:51:14.312498  105708 main.go:141] libmachine: (ha-794405-m03)       <model type='virtio'/>
	I0729 17:51:14.312506  105708 main.go:141] libmachine: (ha-794405-m03)     </interface>
	I0729 17:51:14.312513  105708 main.go:141] libmachine: (ha-794405-m03)     <serial type='pty'>
	I0729 17:51:14.312521  105708 main.go:141] libmachine: (ha-794405-m03)       <target port='0'/>
	I0729 17:51:14.312528  105708 main.go:141] libmachine: (ha-794405-m03)     </serial>
	I0729 17:51:14.312563  105708 main.go:141] libmachine: (ha-794405-m03)     <console type='pty'>
	I0729 17:51:14.312584  105708 main.go:141] libmachine: (ha-794405-m03)       <target type='serial' port='0'/>
	I0729 17:51:14.312597  105708 main.go:141] libmachine: (ha-794405-m03)     </console>
	I0729 17:51:14.312606  105708 main.go:141] libmachine: (ha-794405-m03)     <rng model='virtio'>
	I0729 17:51:14.312617  105708 main.go:141] libmachine: (ha-794405-m03)       <backend model='random'>/dev/random</backend>
	I0729 17:51:14.312626  105708 main.go:141] libmachine: (ha-794405-m03)     </rng>
	I0729 17:51:14.312633  105708 main.go:141] libmachine: (ha-794405-m03)     
	I0729 17:51:14.312642  105708 main.go:141] libmachine: (ha-794405-m03)     
	I0729 17:51:14.312650  105708 main.go:141] libmachine: (ha-794405-m03)   </devices>
	I0729 17:51:14.312660  105708 main.go:141] libmachine: (ha-794405-m03) </domain>
	I0729 17:51:14.312686  105708 main.go:141] libmachine: (ha-794405-m03) 
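
Editor's note: the XML dumped above is the libvirt domain definition for the new node. The sketch below shows roughly how such a domain could be defined and started with the libvirt Go bindings; it assumes the libvirt.org/go/libvirt package and a local file holding the XML, and is not minikube's driver code.

// Sketch: define a libvirt domain from XML like the one above, then start it.
package main

import (
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the config above
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	xml, err := os.ReadFile("ha-794405-m03.xml") // hypothetical file containing the dumped XML
	if err != nil {
		panic(err)
	}
	dom, err := conn.DomainDefineXML(string(xml)) // "define libvirt domain using xml"
	if err != nil {
		panic(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // "Creating domain..." starts the VM
		panic(err)
	}
}
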
	I0729 17:51:14.319904  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:ea:ab:24 in network default
	I0729 17:51:14.320556  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:14.320575  105708 main.go:141] libmachine: (ha-794405-m03) Ensuring networks are active...
	I0729 17:51:14.321415  105708 main.go:141] libmachine: (ha-794405-m03) Ensuring network default is active
	I0729 17:51:14.321796  105708 main.go:141] libmachine: (ha-794405-m03) Ensuring network mk-ha-794405 is active
	I0729 17:51:14.322436  105708 main.go:141] libmachine: (ha-794405-m03) Getting domain xml...
	I0729 17:51:14.323225  105708 main.go:141] libmachine: (ha-794405-m03) Creating domain...
	I0729 17:51:14.709258  105708 main.go:141] libmachine: (ha-794405-m03) Waiting to get IP...
	I0729 17:51:14.709927  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:14.710387  105708 main.go:141] libmachine: (ha-794405-m03) DBG | unable to find current IP address of domain ha-794405-m03 in network mk-ha-794405
	I0729 17:51:14.710412  105708 main.go:141] libmachine: (ha-794405-m03) DBG | I0729 17:51:14.710360  106467 retry.go:31] will retry after 248.338118ms: waiting for machine to come up
	I0729 17:51:14.960853  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:14.961324  105708 main.go:141] libmachine: (ha-794405-m03) DBG | unable to find current IP address of domain ha-794405-m03 in network mk-ha-794405
	I0729 17:51:14.961348  105708 main.go:141] libmachine: (ha-794405-m03) DBG | I0729 17:51:14.961283  106467 retry.go:31] will retry after 340.428087ms: waiting for machine to come up
	I0729 17:51:15.303827  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:15.304407  105708 main.go:141] libmachine: (ha-794405-m03) DBG | unable to find current IP address of domain ha-794405-m03 in network mk-ha-794405
	I0729 17:51:15.304427  105708 main.go:141] libmachine: (ha-794405-m03) DBG | I0729 17:51:15.304331  106467 retry.go:31] will retry after 410.973841ms: waiting for machine to come up
	I0729 17:51:15.716804  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:15.717300  105708 main.go:141] libmachine: (ha-794405-m03) DBG | unable to find current IP address of domain ha-794405-m03 in network mk-ha-794405
	I0729 17:51:15.717332  105708 main.go:141] libmachine: (ha-794405-m03) DBG | I0729 17:51:15.717250  106467 retry.go:31] will retry after 410.507652ms: waiting for machine to come up
	I0729 17:51:16.129586  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:16.130099  105708 main.go:141] libmachine: (ha-794405-m03) DBG | unable to find current IP address of domain ha-794405-m03 in network mk-ha-794405
	I0729 17:51:16.130127  105708 main.go:141] libmachine: (ha-794405-m03) DBG | I0729 17:51:16.130057  106467 retry.go:31] will retry after 580.57811ms: waiting for machine to come up
	I0729 17:51:16.711744  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:16.712255  105708 main.go:141] libmachine: (ha-794405-m03) DBG | unable to find current IP address of domain ha-794405-m03 in network mk-ha-794405
	I0729 17:51:16.712288  105708 main.go:141] libmachine: (ha-794405-m03) DBG | I0729 17:51:16.712210  106467 retry.go:31] will retry after 726.962476ms: waiting for machine to come up
	I0729 17:51:17.440785  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:17.441299  105708 main.go:141] libmachine: (ha-794405-m03) DBG | unable to find current IP address of domain ha-794405-m03 in network mk-ha-794405
	I0729 17:51:17.441327  105708 main.go:141] libmachine: (ha-794405-m03) DBG | I0729 17:51:17.441251  106467 retry.go:31] will retry after 1.017586827s: waiting for machine to come up
	I0729 17:51:18.460466  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:18.460923  105708 main.go:141] libmachine: (ha-794405-m03) DBG | unable to find current IP address of domain ha-794405-m03 in network mk-ha-794405
	I0729 17:51:18.460952  105708 main.go:141] libmachine: (ha-794405-m03) DBG | I0729 17:51:18.460877  106467 retry.go:31] will retry after 921.419747ms: waiting for machine to come up
	I0729 17:51:19.384477  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:19.385037  105708 main.go:141] libmachine: (ha-794405-m03) DBG | unable to find current IP address of domain ha-794405-m03 in network mk-ha-794405
	I0729 17:51:19.385065  105708 main.go:141] libmachine: (ha-794405-m03) DBG | I0729 17:51:19.384979  106467 retry.go:31] will retry after 1.55396863s: waiting for machine to come up
	I0729 17:51:20.940699  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:20.941124  105708 main.go:141] libmachine: (ha-794405-m03) DBG | unable to find current IP address of domain ha-794405-m03 in network mk-ha-794405
	I0729 17:51:20.941156  105708 main.go:141] libmachine: (ha-794405-m03) DBG | I0729 17:51:20.941069  106467 retry.go:31] will retry after 1.592103368s: waiting for machine to come up
	I0729 17:51:22.535925  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:22.536388  105708 main.go:141] libmachine: (ha-794405-m03) DBG | unable to find current IP address of domain ha-794405-m03 in network mk-ha-794405
	I0729 17:51:22.536420  105708 main.go:141] libmachine: (ha-794405-m03) DBG | I0729 17:51:22.536336  106467 retry.go:31] will retry after 1.758793191s: waiting for machine to come up
	I0729 17:51:24.296892  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:24.297388  105708 main.go:141] libmachine: (ha-794405-m03) DBG | unable to find current IP address of domain ha-794405-m03 in network mk-ha-794405
	I0729 17:51:24.297419  105708 main.go:141] libmachine: (ha-794405-m03) DBG | I0729 17:51:24.297339  106467 retry.go:31] will retry after 2.570205531s: waiting for machine to come up
	I0729 17:51:26.869801  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:26.870190  105708 main.go:141] libmachine: (ha-794405-m03) DBG | unable to find current IP address of domain ha-794405-m03 in network mk-ha-794405
	I0729 17:51:26.870210  105708 main.go:141] libmachine: (ha-794405-m03) DBG | I0729 17:51:26.870167  106467 retry.go:31] will retry after 4.232098911s: waiting for machine to come up
	I0729 17:51:31.103439  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:31.103900  105708 main.go:141] libmachine: (ha-794405-m03) DBG | unable to find current IP address of domain ha-794405-m03 in network mk-ha-794405
	I0729 17:51:31.103930  105708 main.go:141] libmachine: (ha-794405-m03) DBG | I0729 17:51:31.103843  106467 retry.go:31] will retry after 5.307752085s: waiting for machine to come up
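
Editor's note: the repeated "will retry after ..." lines above are a polling loop with a growing, jittered delay while waiting for the VM's DHCP lease. A generic sketch of that pattern follows; lookupIP is a hypothetical stand-in for the lease lookup, and the exact growth factor is illustrative.

// Sketch: retry with growing, jittered delay until an IP appears or a timeout hits.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func lookupIP(mac string) (string, error) {
	return "", errors.New("no lease yet") // placeholder for the DHCP-lease lookup
}

func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(mac); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
		fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2 // grow the base delay, roughly as seen in the log
	}
	return "", fmt.Errorf("timed out waiting for IP of %s", mac)
}

func main() {
	if _, err := waitForIP("52:54:00:6d:a7:17", 10*time.Second); err != nil {
		fmt.Println(err)
	}
}
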
	I0729 17:51:36.414191  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:36.414633  105708 main.go:141] libmachine: (ha-794405-m03) Found IP for machine: 192.168.39.185
	I0729 17:51:36.414655  105708 main.go:141] libmachine: (ha-794405-m03) Reserving static IP address...
	I0729 17:51:36.414664  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has current primary IP address 192.168.39.185 and MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:36.414997  105708 main.go:141] libmachine: (ha-794405-m03) DBG | unable to find host DHCP lease matching {name: "ha-794405-m03", mac: "52:54:00:6d:a7:17", ip: "192.168.39.185"} in network mk-ha-794405
	I0729 17:51:36.488205  105708 main.go:141] libmachine: (ha-794405-m03) DBG | Getting to WaitForSSH function...
	I0729 17:51:36.488236  105708 main.go:141] libmachine: (ha-794405-m03) Reserved static IP address: 192.168.39.185
	I0729 17:51:36.488248  105708 main.go:141] libmachine: (ha-794405-m03) Waiting for SSH to be available...
	I0729 17:51:36.490876  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:36.491269  105708 main.go:141] libmachine: (ha-794405-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:a7:17", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:51:28 +0000 UTC Type:0 Mac:52:54:00:6d:a7:17 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:minikube Clientid:01:52:54:00:6d:a7:17}
	I0729 17:51:36.491303  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined IP address 192.168.39.185 and MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:36.491518  105708 main.go:141] libmachine: (ha-794405-m03) DBG | Using SSH client type: external
	I0729 17:51:36.491547  105708 main.go:141] libmachine: (ha-794405-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m03/id_rsa (-rw-------)
	I0729 17:51:36.491581  105708 main.go:141] libmachine: (ha-794405-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.185 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 17:51:36.491595  105708 main.go:141] libmachine: (ha-794405-m03) DBG | About to run SSH command:
	I0729 17:51:36.491618  105708 main.go:141] libmachine: (ha-794405-m03) DBG | exit 0
	I0729 17:51:36.612830  105708 main.go:141] libmachine: (ha-794405-m03) DBG | SSH cmd err, output: <nil>: 
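
Editor's note: the WaitForSSH step above shells out to the system ssh binary with the options shown and runs "exit 0"; a zero exit status means SSH is reachable. A small sketch of that probe, with the host and options taken from the log and the key path left as a placeholder:

// Sketch: probe SSH availability by running `exit 0` on the guest via the external ssh client.
package main

import (
	"fmt"
	"os/exec"
)

func sshAvailable(ip, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@"+ip,
		"exit 0")
	return cmd.Run() == nil // nil error means the remote command exited 0
}

func main() {
	ok := sshAvailable("192.168.39.185", "/path/to/id_rsa") // placeholder key path
	fmt.Println("ssh available:", ok)
}
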
	I0729 17:51:36.613119  105708 main.go:141] libmachine: (ha-794405-m03) KVM machine creation complete!
	I0729 17:51:36.613488  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetConfigRaw
	I0729 17:51:36.613983  105708 main.go:141] libmachine: (ha-794405-m03) Calling .DriverName
	I0729 17:51:36.614189  105708 main.go:141] libmachine: (ha-794405-m03) Calling .DriverName
	I0729 17:51:36.614354  105708 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 17:51:36.614367  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetState
	I0729 17:51:36.615674  105708 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 17:51:36.615687  105708 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 17:51:36.615692  105708 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 17:51:36.615699  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHHostname
	I0729 17:51:36.618113  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:36.618448  105708 main.go:141] libmachine: (ha-794405-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:a7:17", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:51:28 +0000 UTC Type:0 Mac:52:54:00:6d:a7:17 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-794405-m03 Clientid:01:52:54:00:6d:a7:17}
	I0729 17:51:36.618474  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined IP address 192.168.39.185 and MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:36.618652  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHPort
	I0729 17:51:36.618844  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHKeyPath
	I0729 17:51:36.618985  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHKeyPath
	I0729 17:51:36.619096  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHUsername
	I0729 17:51:36.619214  105708 main.go:141] libmachine: Using SSH client type: native
	I0729 17:51:36.619400  105708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0729 17:51:36.619412  105708 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 17:51:36.719979  105708 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 17:51:36.720005  105708 main.go:141] libmachine: Detecting the provisioner...
	I0729 17:51:36.720017  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHHostname
	I0729 17:51:36.722991  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:36.723398  105708 main.go:141] libmachine: (ha-794405-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:a7:17", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:51:28 +0000 UTC Type:0 Mac:52:54:00:6d:a7:17 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-794405-m03 Clientid:01:52:54:00:6d:a7:17}
	I0729 17:51:36.723425  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined IP address 192.168.39.185 and MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:36.723601  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHPort
	I0729 17:51:36.723807  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHKeyPath
	I0729 17:51:36.723981  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHKeyPath
	I0729 17:51:36.724109  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHUsername
	I0729 17:51:36.724286  105708 main.go:141] libmachine: Using SSH client type: native
	I0729 17:51:36.724471  105708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0729 17:51:36.724487  105708 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 17:51:36.825657  105708 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 17:51:36.825720  105708 main.go:141] libmachine: found compatible host: buildroot
	I0729 17:51:36.825731  105708 main.go:141] libmachine: Provisioning with buildroot...
	I0729 17:51:36.825739  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetMachineName
	I0729 17:51:36.826037  105708 buildroot.go:166] provisioning hostname "ha-794405-m03"
	I0729 17:51:36.826070  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetMachineName
	I0729 17:51:36.826288  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHHostname
	I0729 17:51:36.829124  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:36.829573  105708 main.go:141] libmachine: (ha-794405-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:a7:17", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:51:28 +0000 UTC Type:0 Mac:52:54:00:6d:a7:17 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-794405-m03 Clientid:01:52:54:00:6d:a7:17}
	I0729 17:51:36.829604  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined IP address 192.168.39.185 and MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:36.829739  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHPort
	I0729 17:51:36.829908  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHKeyPath
	I0729 17:51:36.830079  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHKeyPath
	I0729 17:51:36.830243  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHUsername
	I0729 17:51:36.830406  105708 main.go:141] libmachine: Using SSH client type: native
	I0729 17:51:36.830585  105708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0729 17:51:36.830600  105708 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-794405-m03 && echo "ha-794405-m03" | sudo tee /etc/hostname
	I0729 17:51:36.949282  105708 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-794405-m03
	
	I0729 17:51:36.949307  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHHostname
	I0729 17:51:36.952008  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:36.952366  105708 main.go:141] libmachine: (ha-794405-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:a7:17", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:51:28 +0000 UTC Type:0 Mac:52:54:00:6d:a7:17 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-794405-m03 Clientid:01:52:54:00:6d:a7:17}
	I0729 17:51:36.952394  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined IP address 192.168.39.185 and MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:36.952586  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHPort
	I0729 17:51:36.952765  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHKeyPath
	I0729 17:51:36.952932  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHKeyPath
	I0729 17:51:36.953080  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHUsername
	I0729 17:51:36.953277  105708 main.go:141] libmachine: Using SSH client type: native
	I0729 17:51:36.953449  105708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0729 17:51:36.953471  105708 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-794405-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-794405-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-794405-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 17:51:37.063551  105708 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 17:51:37.063595  105708 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19339-88081/.minikube CaCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19339-88081/.minikube}
	I0729 17:51:37.063612  105708 buildroot.go:174] setting up certificates
	I0729 17:51:37.063620  105708 provision.go:84] configureAuth start
	I0729 17:51:37.063629  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetMachineName
	I0729 17:51:37.063905  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetIP
	I0729 17:51:37.066402  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:37.066730  105708 main.go:141] libmachine: (ha-794405-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:a7:17", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:51:28 +0000 UTC Type:0 Mac:52:54:00:6d:a7:17 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-794405-m03 Clientid:01:52:54:00:6d:a7:17}
	I0729 17:51:37.066760  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined IP address 192.168.39.185 and MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:37.066922  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHHostname
	I0729 17:51:37.068894  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:37.069229  105708 main.go:141] libmachine: (ha-794405-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:a7:17", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:51:28 +0000 UTC Type:0 Mac:52:54:00:6d:a7:17 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-794405-m03 Clientid:01:52:54:00:6d:a7:17}
	I0729 17:51:37.069255  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined IP address 192.168.39.185 and MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:37.069378  105708 provision.go:143] copyHostCerts
	I0729 17:51:37.069418  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem
	I0729 17:51:37.069458  105708 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem, removing ...
	I0729 17:51:37.069468  105708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem
	I0729 17:51:37.069551  105708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem (1078 bytes)
	I0729 17:51:37.069643  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem
	I0729 17:51:37.069669  105708 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem, removing ...
	I0729 17:51:37.069676  105708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem
	I0729 17:51:37.069713  105708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem (1123 bytes)
	I0729 17:51:37.069783  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem
	I0729 17:51:37.069809  105708 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem, removing ...
	I0729 17:51:37.069825  105708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem
	I0729 17:51:37.069864  105708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem (1679 bytes)
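
Editor's note: copyHostCerts above removes any stale copy of each certificate and re-copies it into the profile directory ("found ..., removing ..." then "cp: ... --> ..."). A minimal sketch of that remove-then-copy step, with placeholder paths:

// Sketch: replace a destination file with a fresh copy of the source.
package main

import (
	"io"
	"os"
)

func copyHostCert(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		if err := os.Remove(dst); err != nil { // "found ..., removing ..."
			return err
		}
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.OpenFile(dst, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0o600)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in) // "cp: <src> --> <dst>"
	return err
}

func main() {
	_ = copyHostCert("certs/ca.pem", "ca.pem") // illustrative paths only
}
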
	I0729 17:51:37.069936  105708 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem org=jenkins.ha-794405-m03 san=[127.0.0.1 192.168.39.185 ha-794405-m03 localhost minikube]
	I0729 17:51:37.123476  105708 provision.go:177] copyRemoteCerts
	I0729 17:51:37.123537  105708 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 17:51:37.123565  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHHostname
	I0729 17:51:37.125942  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:37.126301  105708 main.go:141] libmachine: (ha-794405-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:a7:17", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:51:28 +0000 UTC Type:0 Mac:52:54:00:6d:a7:17 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-794405-m03 Clientid:01:52:54:00:6d:a7:17}
	I0729 17:51:37.126333  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined IP address 192.168.39.185 and MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:37.126470  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHPort
	I0729 17:51:37.126672  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHKeyPath
	I0729 17:51:37.126853  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHUsername
	I0729 17:51:37.126985  105708 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m03/id_rsa Username:docker}
	I0729 17:51:37.208668  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 17:51:37.208731  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 17:51:37.232392  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 17:51:37.232463  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0729 17:51:37.257307  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 17:51:37.257370  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 17:51:37.281279  105708 provision.go:87] duration metric: took 217.645775ms to configureAuth
	I0729 17:51:37.281307  105708 buildroot.go:189] setting minikube options for container-runtime
	I0729 17:51:37.281534  105708 config.go:182] Loaded profile config "ha-794405": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:51:37.281623  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHHostname
	I0729 17:51:37.285007  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:37.285479  105708 main.go:141] libmachine: (ha-794405-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:a7:17", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:51:28 +0000 UTC Type:0 Mac:52:54:00:6d:a7:17 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-794405-m03 Clientid:01:52:54:00:6d:a7:17}
	I0729 17:51:37.285506  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined IP address 192.168.39.185 and MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:37.285699  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHPort
	I0729 17:51:37.285883  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHKeyPath
	I0729 17:51:37.286059  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHKeyPath
	I0729 17:51:37.286202  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHUsername
	I0729 17:51:37.286423  105708 main.go:141] libmachine: Using SSH client type: native
	I0729 17:51:37.286604  105708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0729 17:51:37.286635  105708 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 17:51:37.554130  105708 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 17:51:37.554162  105708 main.go:141] libmachine: Checking connection to Docker...
	I0729 17:51:37.554172  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetURL
	I0729 17:51:37.555534  105708 main.go:141] libmachine: (ha-794405-m03) DBG | Using libvirt version 6000000
	I0729 17:51:37.557565  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:37.558027  105708 main.go:141] libmachine: (ha-794405-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:a7:17", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:51:28 +0000 UTC Type:0 Mac:52:54:00:6d:a7:17 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-794405-m03 Clientid:01:52:54:00:6d:a7:17}
	I0729 17:51:37.558054  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined IP address 192.168.39.185 and MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:37.558190  105708 main.go:141] libmachine: Docker is up and running!
	I0729 17:51:37.558202  105708 main.go:141] libmachine: Reticulating splines...
	I0729 17:51:37.558210  105708 client.go:171] duration metric: took 23.767149838s to LocalClient.Create
	I0729 17:51:37.558240  105708 start.go:167] duration metric: took 23.767212309s to libmachine.API.Create "ha-794405"
	I0729 17:51:37.558258  105708 start.go:293] postStartSetup for "ha-794405-m03" (driver="kvm2")
	I0729 17:51:37.558273  105708 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 17:51:37.558293  105708 main.go:141] libmachine: (ha-794405-m03) Calling .DriverName
	I0729 17:51:37.558577  105708 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 17:51:37.558609  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHHostname
	I0729 17:51:37.561019  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:37.561387  105708 main.go:141] libmachine: (ha-794405-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:a7:17", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:51:28 +0000 UTC Type:0 Mac:52:54:00:6d:a7:17 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-794405-m03 Clientid:01:52:54:00:6d:a7:17}
	I0729 17:51:37.561414  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined IP address 192.168.39.185 and MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:37.561589  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHPort
	I0729 17:51:37.561756  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHKeyPath
	I0729 17:51:37.561897  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHUsername
	I0729 17:51:37.562016  105708 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m03/id_rsa Username:docker}
	I0729 17:51:37.642877  105708 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 17:51:37.646900  105708 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 17:51:37.646923  105708 filesync.go:126] Scanning /home/jenkins/minikube-integration/19339-88081/.minikube/addons for local assets ...
	I0729 17:51:37.646990  105708 filesync.go:126] Scanning /home/jenkins/minikube-integration/19339-88081/.minikube/files for local assets ...
	I0729 17:51:37.647083  105708 filesync.go:149] local asset: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem -> 952822.pem in /etc/ssl/certs
	I0729 17:51:37.647094  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem -> /etc/ssl/certs/952822.pem
	I0729 17:51:37.647196  105708 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 17:51:37.656067  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem --> /etc/ssl/certs/952822.pem (1708 bytes)
	I0729 17:51:37.681469  105708 start.go:296] duration metric: took 123.19384ms for postStartSetup
	I0729 17:51:37.681525  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetConfigRaw
	I0729 17:51:37.682212  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetIP
	I0729 17:51:37.685029  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:37.685398  105708 main.go:141] libmachine: (ha-794405-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:a7:17", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:51:28 +0000 UTC Type:0 Mac:52:54:00:6d:a7:17 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-794405-m03 Clientid:01:52:54:00:6d:a7:17}
	I0729 17:51:37.685419  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined IP address 192.168.39.185 and MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:37.685709  105708 profile.go:143] Saving config to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/config.json ...
	I0729 17:51:37.685928  105708 start.go:128] duration metric: took 23.914423367s to createHost
	I0729 17:51:37.685951  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHHostname
	I0729 17:51:37.688346  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:37.688655  105708 main.go:141] libmachine: (ha-794405-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:a7:17", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:51:28 +0000 UTC Type:0 Mac:52:54:00:6d:a7:17 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-794405-m03 Clientid:01:52:54:00:6d:a7:17}
	I0729 17:51:37.688684  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined IP address 192.168.39.185 and MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:37.688812  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHPort
	I0729 17:51:37.688991  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHKeyPath
	I0729 17:51:37.689106  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHKeyPath
	I0729 17:51:37.689289  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHUsername
	I0729 17:51:37.689463  105708 main.go:141] libmachine: Using SSH client type: native
	I0729 17:51:37.689659  105708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0729 17:51:37.689669  105708 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 17:51:37.793585  105708 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722275497.769611233
	
	I0729 17:51:37.793609  105708 fix.go:216] guest clock: 1722275497.769611233
	I0729 17:51:37.793619  105708 fix.go:229] Guest: 2024-07-29 17:51:37.769611233 +0000 UTC Remote: 2024-07-29 17:51:37.685940461 +0000 UTC m=+154.895501561 (delta=83.670772ms)
	I0729 17:51:37.793642  105708 fix.go:200] guest clock delta is within tolerance: 83.670772ms
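
The three fix.go lines above are the guest-clock check: minikube runs "date +%s.%N" on the new machine, parses the result, and accepts the skew against the host clock when it falls inside a tolerance. A minimal standalone sketch of that comparison in Go (the 1-second tolerance and the use of the local clock in main are illustrative assumptions, not values taken from minikube):

package main

import (
	"fmt"
	"strconv"
	"time"
)

// guestClockDelta parses the "seconds.nanoseconds" string produced by
// `date +%s.%N` on the guest and returns its offset from a reference time.
// float64 parsing loses a little nanosecond precision, which is fine for a sketch.
func guestClockDelta(guest string, reference time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guest, 64)
	if err != nil {
		return 0, err
	}
	guestTime := time.Unix(0, int64(secs*float64(time.Second)))
	return reference.Sub(guestTime), nil
}

func main() {
	// Timestamp copied from the log line above; the reference here is the local
	// clock, so running this today will of course report a huge delta.
	delta, err := guestClockDelta("1722275497.769611233", time.Now())
	if err != nil {
		panic(err)
	}
	const tolerance = time.Second // illustrative tolerance, not minikube's actual setting
	if delta < tolerance && delta > -tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance\n", delta)
	}
}
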
	I0729 17:51:37.793650  105708 start.go:83] releasing machines lock for "ha-794405-m03", held for 24.022296869s
	I0729 17:51:37.793674  105708 main.go:141] libmachine: (ha-794405-m03) Calling .DriverName
	I0729 17:51:37.793974  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetIP
	I0729 17:51:37.796625  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:37.797098  105708 main.go:141] libmachine: (ha-794405-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:a7:17", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:51:28 +0000 UTC Type:0 Mac:52:54:00:6d:a7:17 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-794405-m03 Clientid:01:52:54:00:6d:a7:17}
	I0729 17:51:37.797127  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined IP address 192.168.39.185 and MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:37.799788  105708 out.go:177] * Found network options:
	I0729 17:51:37.801153  105708 out.go:177]   - NO_PROXY=192.168.39.102,192.168.39.62
	W0729 17:51:37.802278  105708 proxy.go:119] fail to check proxy env: Error ip not in block
	W0729 17:51:37.802299  105708 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 17:51:37.802315  105708 main.go:141] libmachine: (ha-794405-m03) Calling .DriverName
	I0729 17:51:37.802912  105708 main.go:141] libmachine: (ha-794405-m03) Calling .DriverName
	I0729 17:51:37.803108  105708 main.go:141] libmachine: (ha-794405-m03) Calling .DriverName
	I0729 17:51:37.803214  105708 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 17:51:37.803250  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHHostname
	W0729 17:51:37.803324  105708 proxy.go:119] fail to check proxy env: Error ip not in block
	W0729 17:51:37.803346  105708 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 17:51:37.803414  105708 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 17:51:37.803431  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHHostname
	I0729 17:51:37.806156  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:37.806537  105708 main.go:141] libmachine: (ha-794405-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:a7:17", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:51:28 +0000 UTC Type:0 Mac:52:54:00:6d:a7:17 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-794405-m03 Clientid:01:52:54:00:6d:a7:17}
	I0729 17:51:37.806561  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined IP address 192.168.39.185 and MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:37.806581  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:37.806722  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHPort
	I0729 17:51:37.806896  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHKeyPath
	I0729 17:51:37.807016  105708 main.go:141] libmachine: (ha-794405-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:a7:17", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:51:28 +0000 UTC Type:0 Mac:52:54:00:6d:a7:17 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-794405-m03 Clientid:01:52:54:00:6d:a7:17}
	I0729 17:51:37.807041  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHUsername
	I0729 17:51:37.807048  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined IP address 192.168.39.185 and MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:37.807187  105708 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m03/id_rsa Username:docker}
	I0729 17:51:37.807226  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHPort
	I0729 17:51:37.807385  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHKeyPath
	I0729 17:51:37.807524  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHUsername
	I0729 17:51:37.807688  105708 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m03/id_rsa Username:docker}
	I0729 17:51:38.034803  105708 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 17:51:38.041903  105708 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 17:51:38.041984  105708 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 17:51:38.060208  105708 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 17:51:38.060235  105708 start.go:495] detecting cgroup driver to use...
	I0729 17:51:38.060294  105708 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 17:51:38.076360  105708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 17:51:38.089724  105708 docker.go:217] disabling cri-docker service (if available) ...
	I0729 17:51:38.089783  105708 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 17:51:38.102853  105708 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 17:51:38.116385  105708 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 17:51:38.229756  105708 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 17:51:38.404745  105708 docker.go:233] disabling docker service ...
	I0729 17:51:38.404834  105708 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 17:51:38.419584  105708 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 17:51:38.433372  105708 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 17:51:38.544792  105708 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 17:51:38.653054  105708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 17:51:38.667071  105708 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 17:51:38.687105  105708 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 17:51:38.687173  105708 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:51:38.699331  105708 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 17:51:38.699397  105708 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:51:38.711428  105708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:51:38.722969  105708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:51:38.734580  105708 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 17:51:38.746232  105708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:51:38.757995  105708 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:51:38.776224  105708 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:51:38.788146  105708 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 17:51:38.798705  105708 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 17:51:38.798757  105708 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 17:51:38.811479  105708 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 17:51:38.820984  105708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 17:51:38.941667  105708 ssh_runner.go:195] Run: sudo systemctl restart crio
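
The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf before CRI-O is restarted: it pins pause_image to registry.k8s.io/pause:3.9, sets cgroup_manager to "cgroupfs", and re-adds conmon_cgroup as "pod". A rough Go sketch of those same text substitutions applied to an in-memory copy of the file (the starting file contents below are invented for illustration, not read from the VM):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Hypothetical starting contents of 02-crio.conf.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.8"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`

	// Mirror of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)

	// Mirror of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	// Mirror of: delete any conmon_cgroup line, then re-add it as "pod"
	// right after the cgroup_manager line.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
		ReplaceAllString(conf, "$0\nconmon_cgroup = \"pod\"")

	fmt.Print(conf)
}
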
	I0729 17:51:39.085748  105708 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 17:51:39.085852  105708 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 17:51:39.091014  105708 start.go:563] Will wait 60s for crictl version
	I0729 17:51:39.091076  105708 ssh_runner.go:195] Run: which crictl
	I0729 17:51:39.095007  105708 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 17:51:39.139907  105708 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 17:51:39.139989  105708 ssh_runner.go:195] Run: crio --version
	I0729 17:51:39.168090  105708 ssh_runner.go:195] Run: crio --version
	I0729 17:51:39.200299  105708 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 17:51:39.201714  105708 out.go:177]   - env NO_PROXY=192.168.39.102
	I0729 17:51:39.202982  105708 out.go:177]   - env NO_PROXY=192.168.39.102,192.168.39.62
	I0729 17:51:39.204237  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetIP
	I0729 17:51:39.207379  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:39.207858  105708 main.go:141] libmachine: (ha-794405-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:a7:17", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:51:28 +0000 UTC Type:0 Mac:52:54:00:6d:a7:17 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-794405-m03 Clientid:01:52:54:00:6d:a7:17}
	I0729 17:51:39.207897  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined IP address 192.168.39.185 and MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:39.208155  105708 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 17:51:39.213137  105708 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 17:51:39.225413  105708 mustload.go:65] Loading cluster: ha-794405
	I0729 17:51:39.225634  105708 config.go:182] Loaded profile config "ha-794405": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:51:39.225892  105708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:51:39.225934  105708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:51:39.241561  105708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43309
	I0729 17:51:39.241969  105708 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:51:39.242481  105708 main.go:141] libmachine: Using API Version  1
	I0729 17:51:39.242502  105708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:51:39.242835  105708 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:51:39.243022  105708 main.go:141] libmachine: (ha-794405) Calling .GetState
	I0729 17:51:39.244548  105708 host.go:66] Checking if "ha-794405" exists ...
	I0729 17:51:39.244834  105708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:51:39.244891  105708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:51:39.259364  105708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32969
	I0729 17:51:39.259878  105708 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:51:39.260357  105708 main.go:141] libmachine: Using API Version  1
	I0729 17:51:39.260378  105708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:51:39.260707  105708 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:51:39.260915  105708 main.go:141] libmachine: (ha-794405) Calling .DriverName
	I0729 17:51:39.261071  105708 certs.go:68] Setting up /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405 for IP: 192.168.39.185
	I0729 17:51:39.261084  105708 certs.go:194] generating shared ca certs ...
	I0729 17:51:39.261101  105708 certs.go:226] acquiring lock for ca certs: {Name:mk20106d58b3f22ea0fc0f4b499fa3fa572a2690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:51:39.261221  105708 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key
	I0729 17:51:39.261269  105708 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key
	I0729 17:51:39.261282  105708 certs.go:256] generating profile certs ...
	I0729 17:51:39.261387  105708 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/client.key
	I0729 17:51:39.261418  105708 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.key.0f64bc4c
	I0729 17:51:39.261438  105708 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.crt.0f64bc4c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.102 192.168.39.62 192.168.39.185 192.168.39.254]
	I0729 17:51:39.370954  105708 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.crt.0f64bc4c ...
	I0729 17:51:39.370983  105708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.crt.0f64bc4c: {Name:mk9ad2699a6f08d6feea0804a30182c285b135b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:51:39.371165  105708 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.key.0f64bc4c ...
	I0729 17:51:39.371181  105708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.key.0f64bc4c: {Name:mk1edda8ff2e7a1dff1452cad9bc647746822586 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:51:39.371289  105708 certs.go:381] copying /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.crt.0f64bc4c -> /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.crt
	I0729 17:51:39.371449  105708 certs.go:385] copying /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.key.0f64bc4c -> /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.key
	I0729 17:51:39.371619  105708 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/proxy-client.key
	I0729 17:51:39.371640  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 17:51:39.371658  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 17:51:39.371678  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 17:51:39.371695  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 17:51:39.371712  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 17:51:39.371727  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 17:51:39.371743  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 17:51:39.371761  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 17:51:39.371827  105708 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282.pem (1338 bytes)
	W0729 17:51:39.371868  105708 certs.go:480] ignoring /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282_empty.pem, impossibly tiny 0 bytes
	I0729 17:51:39.371881  105708 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 17:51:39.371917  105708 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem (1078 bytes)
	I0729 17:51:39.371948  105708 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem (1123 bytes)
	I0729 17:51:39.371988  105708 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem (1679 bytes)
	I0729 17:51:39.372044  105708 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem (1708 bytes)
	I0729 17:51:39.372082  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282.pem -> /usr/share/ca-certificates/95282.pem
	I0729 17:51:39.372108  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem -> /usr/share/ca-certificates/952822.pem
	I0729 17:51:39.372123  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:51:39.372165  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHHostname
	I0729 17:51:39.375170  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:51:39.375646  105708 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:51:39.375674  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:51:39.375915  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHPort
	I0729 17:51:39.376114  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:51:39.376271  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHUsername
	I0729 17:51:39.376402  105708 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405/id_rsa Username:docker}
	I0729 17:51:39.449248  105708 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0729 17:51:39.454254  105708 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0729 17:51:39.465664  105708 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0729 17:51:39.469745  105708 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0729 17:51:39.482969  105708 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0729 17:51:39.487408  105708 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0729 17:51:39.500935  105708 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0729 17:51:39.505908  105708 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0729 17:51:39.516676  105708 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0729 17:51:39.520797  105708 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0729 17:51:39.530928  105708 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0729 17:51:39.535723  105708 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0729 17:51:39.546854  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 17:51:39.575157  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 17:51:39.602960  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 17:51:39.627624  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 17:51:39.654674  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0729 17:51:39.681302  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 17:51:39.706741  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 17:51:39.730706  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 17:51:39.753580  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282.pem --> /usr/share/ca-certificates/95282.pem (1338 bytes)
	I0729 17:51:39.779188  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem --> /usr/share/ca-certificates/952822.pem (1708 bytes)
	I0729 17:51:39.805025  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 17:51:39.830566  105708 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0729 17:51:39.848010  105708 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0729 17:51:39.865383  105708 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0729 17:51:39.882453  105708 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0729 17:51:39.898993  105708 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0729 17:51:39.914624  105708 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0729 17:51:39.930487  105708 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0729 17:51:39.946333  105708 ssh_runner.go:195] Run: openssl version
	I0729 17:51:39.951926  105708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/952822.pem && ln -fs /usr/share/ca-certificates/952822.pem /etc/ssl/certs/952822.pem"
	I0729 17:51:39.962653  105708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/952822.pem
	I0729 17:51:39.967172  105708 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:45 /usr/share/ca-certificates/952822.pem
	I0729 17:51:39.967217  105708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/952822.pem
	I0729 17:51:39.973243  105708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/952822.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 17:51:39.985022  105708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 17:51:39.995057  105708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:51:39.999521  105708 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:51:39.999576  105708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:51:40.005332  105708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 17:51:40.015845  105708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/95282.pem && ln -fs /usr/share/ca-certificates/95282.pem /etc/ssl/certs/95282.pem"
	I0729 17:51:40.025936  105708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/95282.pem
	I0729 17:51:40.030310  105708 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:45 /usr/share/ca-certificates/95282.pem
	I0729 17:51:40.030361  105708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/95282.pem
	I0729 17:51:40.036076  105708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/95282.pem /etc/ssl/certs/51391683.0"
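
Each certificate copied under /usr/share/ca-certificates is then exposed to the system trust store through OpenSSL's subject-hash naming: "openssl x509 -hash -noout" yields the hash, and a "<hash>.0" symlink pointing at the certificate is created in /etc/ssl/certs, which is what the ln -fs commands above do. A small local sketch of that step (paths are placeholders; the test performs this over SSH on the guest):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash computes the OpenSSL subject hash of certPath and creates
// the <certsDir>/<hash>.0 symlink that the OpenSSL trust store looks up.
func linkCertByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	// Replace any stale link, mirroring `ln -fs`.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	// Placeholder paths; the run above uses /usr/share/ca-certificates/*.pem on the guest.
	if err := linkCertByHash("/usr/share/ca-certificates/95282.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
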
	I0729 17:51:40.047264  105708 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 17:51:40.051418  105708 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 17:51:40.051478  105708 kubeadm.go:934] updating node {m03 192.168.39.185 8443 v1.30.3 crio true true} ...
	I0729 17:51:40.051600  105708 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-794405-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.185
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-794405 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 17:51:40.051637  105708 kube-vip.go:115] generating kube-vip config ...
	I0729 17:51:40.051681  105708 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 17:51:40.067051  105708 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 17:51:40.067116  105708 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0729 17:51:40.067181  105708 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 17:51:40.077259  105708 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0729 17:51:40.077323  105708 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0729 17:51:40.087388  105708 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0729 17:51:40.087413  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 17:51:40.087455  105708 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0729 17:51:40.087489  105708 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 17:51:40.087496  105708 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0729 17:51:40.087531  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 17:51:40.087506  105708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:51:40.087616  105708 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 17:51:40.092281  105708 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0729 17:51:40.092305  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0729 17:51:40.131874  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 17:51:40.131903  105708 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0729 17:51:40.131927  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0729 17:51:40.131977  105708 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 17:51:40.184392  105708 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0729 17:51:40.184448  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
	I0729 17:51:41.009843  105708 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0729 17:51:41.019819  105708 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0729 17:51:41.036516  105708 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 17:51:41.053300  105708 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0729 17:51:41.070512  105708 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 17:51:41.075014  105708 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
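
The bash one-liner above is the /etc/hosts update: it filters out any existing line ending in a tab plus control-plane.minikube.internal and appends the VIP entry for 192.168.39.254. The same filter-and-append, sketched in Go against an in-memory copy of the file (the sample contents are illustrative):

package main

import (
	"fmt"
	"strings"
)

// addHostEntry drops any line already ending in "\t<host>" and appends a
// fresh "<ip>\t<host>" entry, mirroring the grep -v / echo pipeline above.
func addHostEntry(hostsFile, ip, host string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hostsFile, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop any stale entry, like grep -v
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	// Illustrative contents only; the real run edits the guest's /etc/hosts over SSH.
	hosts := "127.0.0.1\tlocalhost\n192.168.39.1\thost.minikube.internal\n"
	fmt.Print(addHostEntry(hosts, "192.168.39.254", "control-plane.minikube.internal"))
}
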
	I0729 17:51:41.088092  105708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 17:51:41.226113  105708 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 17:51:41.245974  105708 host.go:66] Checking if "ha-794405" exists ...
	I0729 17:51:41.246427  105708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:51:41.246487  105708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:51:41.262609  105708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42093
	I0729 17:51:41.263056  105708 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:51:41.263676  105708 main.go:141] libmachine: Using API Version  1
	I0729 17:51:41.263704  105708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:51:41.264057  105708 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:51:41.264285  105708 main.go:141] libmachine: (ha-794405) Calling .DriverName
	I0729 17:51:41.264449  105708 start.go:317] joinCluster: &{Name:ha-794405 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-794405 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.62 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.185 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:51:41.264625  105708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0729 17:51:41.264651  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHHostname
	I0729 17:51:41.267557  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:51:41.268013  105708 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:51:41.268047  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:51:41.268162  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHPort
	I0729 17:51:41.268342  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:51:41.268472  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHUsername
	I0729 17:51:41.268607  105708 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405/id_rsa Username:docker}
	I0729 17:51:41.440958  105708 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.185 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 17:51:41.441015  105708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token t2ykit.l2mn21qacn94oqux --discovery-token-ca-cert-hash sha256:13f0bc092dc09b7291dec830fc09c94db5ac1707dd26df6091df80009af3af7f --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-794405-m03 --control-plane --apiserver-advertise-address=192.168.39.185 --apiserver-bind-port=8443"
	I0729 17:52:05.729150  105708 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token t2ykit.l2mn21qacn94oqux --discovery-token-ca-cert-hash sha256:13f0bc092dc09b7291dec830fc09c94db5ac1707dd26df6091df80009af3af7f --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-794405-m03 --control-plane --apiserver-advertise-address=192.168.39.185 --apiserver-bind-port=8443": (24.288102608s)
	I0729 17:52:05.729199  105708 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0729 17:52:06.400473  105708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-794405-m03 minikube.k8s.io/updated_at=2024_07_29T17_52_06_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=de53afae5e8f4269438099f4dad14d93a8a17e35 minikube.k8s.io/name=ha-794405 minikube.k8s.io/primary=false
	I0729 17:52:06.547141  105708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-794405-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0729 17:52:06.684118  105708 start.go:319] duration metric: took 25.41966317s to joinCluster
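
The join itself follows the kubeadm flow visible in the log: "kubeadm token create --print-join-command --ttl=0" on the primary mints the join command, the new node runs it with the extra control-plane flags shown above (--control-plane, --apiserver-advertise-address, --cri-socket, and so on), and the node is then labeled and has its NoSchedule taint removed. A hedged sketch that only assembles and prints that command; in the real test the two halves run over SSH on different machines, and the addresses and node name below are the ones from this particular run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Step 1 (on the primary control-plane node, where kubeadm is installed):
	// mint a join command. Flags match the log above.
	out, err := exec.Command("kubeadm", "token", "create", "--print-join-command", "--ttl=0").Output()
	if err != nil {
		panic(err)
	}
	joinCmd := strings.Fields(strings.TrimSpace(string(out)))

	// Step 2 (on the joining node): extend it into a control-plane join.
	joinCmd = append(joinCmd,
		"--ignore-preflight-errors=all",
		"--cri-socket", "unix:///var/run/crio/crio.sock",
		"--node-name=ha-794405-m03",
		"--control-plane",
		"--apiserver-advertise-address=192.168.39.185",
		"--apiserver-bind-port=8443",
	)
	fmt.Println("would run:", strings.Join(joinCmd, " "))
	// exec.Command(joinCmd[0], joinCmd[1:]...).Run() would execute it for real.
}
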
	I0729 17:52:06.684219  105708 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.185 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 17:52:06.684723  105708 config.go:182] Loaded profile config "ha-794405": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:52:06.685937  105708 out.go:177] * Verifying Kubernetes components...
	I0729 17:52:06.687299  105708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 17:52:07.001516  105708 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 17:52:07.092644  105708 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19339-88081/kubeconfig
	I0729 17:52:07.092905  105708 kapi.go:59] client config for ha-794405: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/client.crt", KeyFile:"/home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/client.key", CAFile:"/home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0729 17:52:07.092977  105708 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.102:8443
	I0729 17:52:07.093351  105708 node_ready.go:35] waiting up to 6m0s for node "ha-794405-m03" to be "Ready" ...
	I0729 17:52:07.093460  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:07.093471  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:07.093481  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:07.093488  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:07.096691  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:07.593951  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:07.593975  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:07.593983  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:07.593987  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:07.597596  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:08.094137  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:08.094163  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:08.094174  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:08.094181  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:08.098001  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:08.594166  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:08.594193  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:08.594205  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:08.594210  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:08.597318  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:09.093727  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:09.093752  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:09.093758  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:09.093761  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:09.096800  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:09.097526  105708 node_ready.go:53] node "ha-794405-m03" has status "Ready":"False"
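
The repeating GET requests against /api/v1/nodes/ha-794405-m03 are the readiness wait: minikube polls the API server roughly every 500ms, for up to 6 minutes, until the node's Ready condition turns True. An equivalent poll written directly against client-go, as a sketch (the kubeconfig path is a placeholder):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(6 * time.Minute) // same budget as "waiting up to 6m0s" above
	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-794405-m03", metav1.GetOptions{})
		if err == nil && nodeReady(node) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for node to become Ready")
}
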
	I0729 17:52:09.593931  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:09.593951  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:09.593959  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:09.593964  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:09.598145  105708 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 17:52:10.093753  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:10.093779  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:10.093791  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:10.093801  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:10.098019  105708 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 17:52:10.594395  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:10.594423  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:10.594434  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:10.594440  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:10.598134  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:11.094379  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:11.094407  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:11.094419  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:11.094425  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:11.098039  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:11.098742  105708 node_ready.go:53] node "ha-794405-m03" has status "Ready":"False"
	I0729 17:52:11.594240  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:11.594271  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:11.594283  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:11.594291  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:11.597458  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:12.093653  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:12.093679  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:12.093689  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:12.093693  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:12.097391  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:12.593808  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:12.593835  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:12.593844  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:12.593848  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:12.597483  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:13.094127  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:13.094149  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:13.094156  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:13.094161  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:13.097539  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:13.594152  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:13.594180  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:13.594193  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:13.594197  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:13.600588  105708 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0729 17:52:13.601209  105708 node_ready.go:53] node "ha-794405-m03" has status "Ready":"False"
	I0729 17:52:14.093641  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:14.093663  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:14.093671  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:14.093680  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:14.096907  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:14.593508  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:14.593533  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:14.593543  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:14.593548  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:14.596723  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:15.093697  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:15.093720  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:15.093728  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:15.093732  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:15.097273  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:15.593620  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:15.593651  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:15.593663  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:15.593668  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:15.596952  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:16.093834  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:16.093858  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:16.093866  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:16.093870  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:16.097198  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:16.098052  105708 node_ready.go:53] node "ha-794405-m03" has status "Ready":"False"
	I0729 17:52:16.593735  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:16.593758  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:16.593767  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:16.593772  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:16.596889  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:17.094160  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:17.094186  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:17.094197  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:17.094204  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:17.097538  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:17.594488  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:17.594515  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:17.594523  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:17.594526  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:17.597661  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:18.094116  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:18.094141  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:18.094151  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:18.094156  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:18.097888  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:18.098539  105708 node_ready.go:53] node "ha-794405-m03" has status "Ready":"False"
	I0729 17:52:18.593933  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:18.593958  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:18.593971  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:18.593975  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:18.597907  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:19.094256  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:19.094288  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:19.094301  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:19.094306  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:19.098574  105708 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 17:52:19.594100  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:19.594122  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:19.594130  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:19.594135  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:19.597121  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:52:20.094163  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:20.094185  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:20.094193  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:20.094199  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:20.099057  105708 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 17:52:20.099921  105708 node_ready.go:53] node "ha-794405-m03" has status "Ready":"False"
	I0729 17:52:20.594118  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:20.594140  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:20.594149  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:20.594154  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:20.597180  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:21.094340  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:21.094365  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:21.094374  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:21.094378  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:21.097640  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:21.594113  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:21.594136  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:21.594144  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:21.594147  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:21.597402  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:22.094481  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:22.094508  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:22.094518  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:22.094522  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:22.107733  105708 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0729 17:52:22.108412  105708 node_ready.go:49] node "ha-794405-m03" has status "Ready":"True"
	I0729 17:52:22.108441  105708 node_ready.go:38] duration metric: took 15.015062151s for node "ha-794405-m03" to be "Ready" ...
	I0729 17:52:22.108452  105708 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 17:52:22.108533  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I0729 17:52:22.108546  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:22.108556  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:22.108560  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:22.115703  105708 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0729 17:52:22.122388  105708 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-bb2jg" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:22.122477  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-bb2jg
	I0729 17:52:22.122486  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:22.122494  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:22.122497  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:22.125882  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:22.126777  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405
	I0729 17:52:22.126791  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:22.126798  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:22.126801  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:22.129232  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:52:22.129664  105708 pod_ready.go:92] pod "coredns-7db6d8ff4d-bb2jg" in "kube-system" namespace has status "Ready":"True"
	I0729 17:52:22.129681  105708 pod_ready.go:81] duration metric: took 7.267572ms for pod "coredns-7db6d8ff4d-bb2jg" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:22.129689  105708 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-nzvff" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:22.129737  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nzvff
	I0729 17:52:22.129744  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:22.129751  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:22.129756  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:22.133407  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:22.134013  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405
	I0729 17:52:22.134030  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:22.134037  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:22.134043  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:22.136873  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:52:22.137286  105708 pod_ready.go:92] pod "coredns-7db6d8ff4d-nzvff" in "kube-system" namespace has status "Ready":"True"
	I0729 17:52:22.137305  105708 pod_ready.go:81] duration metric: took 7.608491ms for pod "coredns-7db6d8ff4d-nzvff" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:22.137316  105708 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-794405" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:22.137369  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-794405
	I0729 17:52:22.137379  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:22.137389  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:22.137395  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:22.140251  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:52:22.141219  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405
	I0729 17:52:22.141232  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:22.141238  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:22.141244  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:22.144019  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:52:22.144818  105708 pod_ready.go:92] pod "etcd-ha-794405" in "kube-system" namespace has status "Ready":"True"
	I0729 17:52:22.144833  105708 pod_ready.go:81] duration metric: took 7.510577ms for pod "etcd-ha-794405" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:22.144840  105708 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-794405-m02" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:22.144907  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-794405-m02
	I0729 17:52:22.144917  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:22.144923  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:22.144927  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:22.147860  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:52:22.148905  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:52:22.148921  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:22.148931  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:22.148938  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:22.150970  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:52:22.151405  105708 pod_ready.go:92] pod "etcd-ha-794405-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 17:52:22.151423  105708 pod_ready.go:81] duration metric: took 6.576669ms for pod "etcd-ha-794405-m02" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:22.151434  105708 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-794405-m03" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:22.294790  105708 request.go:629] Waited for 143.290566ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-794405-m03
	I0729 17:52:22.294876  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-794405-m03
	I0729 17:52:22.294887  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:22.294898  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:22.294907  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:22.297667  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:52:22.494604  105708 request.go:629] Waited for 196.288993ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:22.494664  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:22.494669  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:22.494677  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:22.494682  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:22.498015  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:22.498640  105708 pod_ready.go:92] pod "etcd-ha-794405-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 17:52:22.498662  105708 pod_ready.go:81] duration metric: took 347.221622ms for pod "etcd-ha-794405-m03" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:22.498685  105708 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-794405" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:22.694620  105708 request.go:629] Waited for 195.855925ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-794405
	I0729 17:52:22.694692  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-794405
	I0729 17:52:22.694697  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:22.694704  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:22.694710  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:22.697741  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:22.894865  105708 request.go:629] Waited for 196.229078ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-794405
	I0729 17:52:22.894930  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405
	I0729 17:52:22.894936  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:22.894948  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:22.894955  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:22.898028  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:22.898788  105708 pod_ready.go:92] pod "kube-apiserver-ha-794405" in "kube-system" namespace has status "Ready":"True"
	I0729 17:52:22.898807  105708 pod_ready.go:81] duration metric: took 400.109837ms for pod "kube-apiserver-ha-794405" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:22.898827  105708 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-794405-m02" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:23.095419  105708 request.go:629] Waited for 196.501ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-794405-m02
	I0729 17:52:23.095642  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-794405-m02
	I0729 17:52:23.095669  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:23.095681  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:23.095693  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:23.098878  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:23.294916  105708 request.go:629] Waited for 195.278918ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:52:23.294979  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:52:23.294987  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:23.294996  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:23.295002  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:23.298687  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:23.299396  105708 pod_ready.go:92] pod "kube-apiserver-ha-794405-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 17:52:23.299426  105708 pod_ready.go:81] duration metric: took 400.589256ms for pod "kube-apiserver-ha-794405-m02" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:23.299439  105708 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-794405-m03" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:23.495317  105708 request.go:629] Waited for 195.767589ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-794405-m03
	I0729 17:52:23.495395  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-794405-m03
	I0729 17:52:23.495405  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:23.495417  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:23.495425  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:23.499174  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:23.694651  105708 request.go:629] Waited for 193.281404ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:23.694722  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:23.694727  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:23.694735  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:23.694740  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:23.698483  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:23.699565  105708 pod_ready.go:92] pod "kube-apiserver-ha-794405-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 17:52:23.699585  105708 pod_ready.go:81] duration metric: took 400.13736ms for pod "kube-apiserver-ha-794405-m03" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:23.699601  105708 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-794405" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:23.895283  105708 request.go:629] Waited for 195.596381ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-794405
	I0729 17:52:23.895360  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-794405
	I0729 17:52:23.895366  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:23.895374  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:23.895378  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:23.898525  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:24.094774  105708 request.go:629] Waited for 195.35988ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-794405
	I0729 17:52:24.094846  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405
	I0729 17:52:24.094855  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:24.094865  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:24.094876  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:24.097820  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:52:24.098509  105708 pod_ready.go:92] pod "kube-controller-manager-ha-794405" in "kube-system" namespace has status "Ready":"True"
	I0729 17:52:24.098528  105708 pod_ready.go:81] duration metric: took 398.913833ms for pod "kube-controller-manager-ha-794405" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:24.098538  105708 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-794405-m02" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:24.294502  105708 request.go:629] Waited for 195.889611ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-794405-m02
	I0729 17:52:24.294562  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-794405-m02
	I0729 17:52:24.294567  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:24.294574  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:24.294582  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:24.297602  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:52:24.494783  105708 request.go:629] Waited for 196.364553ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:52:24.494844  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:52:24.494849  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:24.494857  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:24.494862  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:24.498051  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:24.498652  105708 pod_ready.go:92] pod "kube-controller-manager-ha-794405-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 17:52:24.498678  105708 pod_ready.go:81] duration metric: took 400.133287ms for pod "kube-controller-manager-ha-794405-m02" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:24.498694  105708 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-794405-m03" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:24.694575  105708 request.go:629] Waited for 195.792594ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-794405-m03
	I0729 17:52:24.694669  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-794405-m03
	I0729 17:52:24.694678  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:24.694689  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:24.694698  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:24.698084  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:24.895177  105708 request.go:629] Waited for 196.401878ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:24.895252  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:24.895263  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:24.895301  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:24.895310  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:24.898701  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:24.899355  105708 pod_ready.go:92] pod "kube-controller-manager-ha-794405-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 17:52:24.899374  105708 pod_ready.go:81] duration metric: took 400.671302ms for pod "kube-controller-manager-ha-794405-m03" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:24.899383  105708 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-llkz8" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:25.095483  105708 request.go:629] Waited for 196.033676ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-llkz8
	I0729 17:52:25.095585  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-llkz8
	I0729 17:52:25.095596  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:25.095607  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:25.095613  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:25.098769  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:25.294943  105708 request.go:629] Waited for 195.360516ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-794405
	I0729 17:52:25.295029  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405
	I0729 17:52:25.295034  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:25.295042  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:25.295049  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:25.297909  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:52:25.298495  105708 pod_ready.go:92] pod "kube-proxy-llkz8" in "kube-system" namespace has status "Ready":"True"
	I0729 17:52:25.298518  105708 pod_ready.go:81] duration metric: took 399.128803ms for pod "kube-proxy-llkz8" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:25.298527  105708 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ndmlm" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:25.494555  105708 request.go:629] Waited for 195.94168ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ndmlm
	I0729 17:52:25.494659  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ndmlm
	I0729 17:52:25.494666  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:25.494674  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:25.494678  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:25.498225  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:25.695461  105708 request.go:629] Waited for 196.323528ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:25.695517  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:25.695521  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:25.695529  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:25.695534  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:25.698829  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:25.699491  105708 pod_ready.go:92] pod "kube-proxy-ndmlm" in "kube-system" namespace has status "Ready":"True"
	I0729 17:52:25.699512  105708 pod_ready.go:81] duration metric: took 400.977802ms for pod "kube-proxy-ndmlm" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:25.699524  105708 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qcmxl" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:25.894477  105708 request.go:629] Waited for 194.854751ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qcmxl
	I0729 17:52:25.894569  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qcmxl
	I0729 17:52:25.894612  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:25.894623  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:25.894629  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:25.898150  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:26.095284  105708 request.go:629] Waited for 196.396948ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:52:26.095358  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:52:26.095366  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:26.095377  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:26.095388  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:26.098864  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:26.099520  105708 pod_ready.go:92] pod "kube-proxy-qcmxl" in "kube-system" namespace has status "Ready":"True"
	I0729 17:52:26.099556  105708 pod_ready.go:81] duration metric: took 400.017239ms for pod "kube-proxy-qcmxl" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:26.099565  105708 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-794405" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:26.295195  105708 request.go:629] Waited for 195.560076ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-794405
	I0729 17:52:26.295273  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-794405
	I0729 17:52:26.295280  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:26.295288  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:26.295293  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:26.298472  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:26.494543  105708 request.go:629] Waited for 195.281031ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-794405
	I0729 17:52:26.494623  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405
	I0729 17:52:26.494632  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:26.494642  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:26.494647  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:26.498204  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:26.498710  105708 pod_ready.go:92] pod "kube-scheduler-ha-794405" in "kube-system" namespace has status "Ready":"True"
	I0729 17:52:26.498732  105708 pod_ready.go:81] duration metric: took 399.158818ms for pod "kube-scheduler-ha-794405" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:26.498746  105708 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-794405-m02" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:26.694837  105708 request.go:629] Waited for 195.973722ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-794405-m02
	I0729 17:52:26.694908  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-794405-m02
	I0729 17:52:26.694915  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:26.694925  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:26.694932  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:26.698462  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:26.895254  105708 request.go:629] Waited for 195.851427ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:52:26.895307  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:52:26.895314  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:26.895324  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:26.895331  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:26.899943  105708 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 17:52:26.900594  105708 pod_ready.go:92] pod "kube-scheduler-ha-794405-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 17:52:26.900616  105708 pod_ready.go:81] duration metric: took 401.864196ms for pod "kube-scheduler-ha-794405-m02" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:26.900626  105708 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-794405-m03" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:27.095062  105708 request.go:629] Waited for 194.356554ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-794405-m03
	I0729 17:52:27.095119  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-794405-m03
	I0729 17:52:27.095124  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:27.095132  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:27.095138  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:27.098295  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:27.295286  105708 request.go:629] Waited for 196.364582ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:27.295340  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:27.295345  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:27.295352  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:27.295356  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:27.298568  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:27.299084  105708 pod_ready.go:92] pod "kube-scheduler-ha-794405-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 17:52:27.299104  105708 pod_ready.go:81] duration metric: took 398.469732ms for pod "kube-scheduler-ha-794405-m03" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:27.299114  105708 pod_ready.go:38] duration metric: took 5.190649862s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 17:52:27.299130  105708 api_server.go:52] waiting for apiserver process to appear ...
	I0729 17:52:27.299188  105708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 17:52:27.316096  105708 api_server.go:72] duration metric: took 20.631831701s to wait for apiserver process to appear ...
	I0729 17:52:27.316122  105708 api_server.go:88] waiting for apiserver healthz status ...
	I0729 17:52:27.316146  105708 api_server.go:253] Checking apiserver healthz at https://192.168.39.102:8443/healthz ...
	I0729 17:52:27.320502  105708 api_server.go:279] https://192.168.39.102:8443/healthz returned 200:
	ok
	I0729 17:52:27.320588  105708 round_trippers.go:463] GET https://192.168.39.102:8443/version
	I0729 17:52:27.320599  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:27.320609  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:27.320622  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:27.321551  105708 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0729 17:52:27.321626  105708 api_server.go:141] control plane version: v1.30.3
	I0729 17:52:27.321645  105708 api_server.go:131] duration metric: took 5.514184ms to wait for apiserver health ...
	I0729 17:52:27.321656  105708 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 17:52:27.495031  105708 request.go:629] Waited for 173.277349ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I0729 17:52:27.495091  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I0729 17:52:27.495096  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:27.495103  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:27.495109  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:27.503688  105708 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0729 17:52:27.509944  105708 system_pods.go:59] 24 kube-system pods found
	I0729 17:52:27.509972  105708 system_pods.go:61] "coredns-7db6d8ff4d-bb2jg" [ee9ad335-25b2-4e6c-a523-47b06ce713dc] Running
	I0729 17:52:27.509976  105708 system_pods.go:61] "coredns-7db6d8ff4d-nzvff" [b1e2c116-2549-4e1a-8d79-cd86595db9f3] Running
	I0729 17:52:27.509980  105708 system_pods.go:61] "etcd-ha-794405" [95b13c7f-e6bf-4225-a00b-de1fefb711ac] Running
	I0729 17:52:27.509984  105708 system_pods.go:61] "etcd-ha-794405-m02" [5c99a8df-66da-45b0-b62a-30e17113a35d] Running
	I0729 17:52:27.509987  105708 system_pods.go:61] "etcd-ha-794405-m03" [96db3933-6f55-4e09-8d3b-8e5ea049e182] Running
	I0729 17:52:27.509992  105708 system_pods.go:61] "kindnet-8qgq5" [0b2b707a-4283-41cd-9f3b-83b6f2d169cf] Running
	I0729 17:52:27.509996  105708 system_pods.go:61] "kindnet-g2qqp" [c4a0c764-368c-4059-be5b-ff49aa48f5af] Running
	I0729 17:52:27.510001  105708 system_pods.go:61] "kindnet-j4l89" [c0b81d74-531b-4878-84ea-654e7b57f0ba] Running
	I0729 17:52:27.510005  105708 system_pods.go:61] "kube-apiserver-ha-794405" [32e07436-25ae-4f51-b0e6-004ed954d8b7] Running
	I0729 17:52:27.510013  105708 system_pods.go:61] "kube-apiserver-ha-794405-m02" [0e7afc5b-62cf-4874-ade9-1df7a6c3926d] Running
	I0729 17:52:27.510018  105708 system_pods.go:61] "kube-apiserver-ha-794405-m03" [f4e70efe-e9bb-4157-9bdc-c69c621a4a9f] Running
	I0729 17:52:27.510024  105708 system_pods.go:61] "kube-controller-manager-ha-794405" [c2d30a4e-d2bc-4221-b8d9-98b67932e4ab] Running
	I0729 17:52:27.510031  105708 system_pods.go:61] "kube-controller-manager-ha-794405-m02" [88e040cd-4d7f-4a2e-97cb-4f24249d1c82] Running
	I0729 17:52:27.510039  105708 system_pods.go:61] "kube-controller-manager-ha-794405-m03" [bc163b01-3b26-4102-99c7-57070c064741] Running
	I0729 17:52:27.510043  105708 system_pods.go:61] "kube-proxy-llkz8" [95536eff-3f12-4a7e-9504-c8f6b1acc4cb] Running
	I0729 17:52:27.510047  105708 system_pods.go:61] "kube-proxy-ndmlm" [e49d3ffa-561a-4fee-9438-79bd64eaa77e] Running
	I0729 17:52:27.510050  105708 system_pods.go:61] "kube-proxy-qcmxl" [963fc8f8-3080-4602-9437-d2060c7ea622] Running
	I0729 17:52:27.510053  105708 system_pods.go:61] "kube-scheduler-ha-794405" [b41eb8ec-bf30-4cc6-b454-28305ccf70b5] Running
	I0729 17:52:27.510058  105708 system_pods.go:61] "kube-scheduler-ha-794405-m02" [8a0bfe0d-b80a-4799-8371-84041938bf1d] Running
	I0729 17:52:27.510061  105708 system_pods.go:61] "kube-scheduler-ha-794405-m03" [a04e274d-fa85-48c1-b346-5abc439b1caa] Running
	I0729 17:52:27.510064  105708 system_pods.go:61] "kube-vip-ha-794405" [0e782ab8-0d52-4894-b003-493294ab4710] Running
	I0729 17:52:27.510067  105708 system_pods.go:61] "kube-vip-ha-794405-m02" [f7e2f40a-28ab-4655-a82c-11df8cf806d5] Running
	I0729 17:52:27.510072  105708 system_pods.go:61] "kube-vip-ha-794405-m03" [c6cf8681-5029-4139-b6f5-9c72e1a186a7] Running
	I0729 17:52:27.510075  105708 system_pods.go:61] "storage-provisioner" [0e08d093-f8b5-4614-9be2-5832f7cafa75] Running
	I0729 17:52:27.510080  105708 system_pods.go:74] duration metric: took 188.415985ms to wait for pod list to return data ...
	I0729 17:52:27.510089  105708 default_sa.go:34] waiting for default service account to be created ...
	I0729 17:52:27.695511  105708 request.go:629] Waited for 185.340573ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/default/serviceaccounts
	I0729 17:52:27.695572  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/default/serviceaccounts
	I0729 17:52:27.695577  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:27.695585  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:27.695589  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:27.698868  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:27.699001  105708 default_sa.go:45] found service account: "default"
	I0729 17:52:27.699016  105708 default_sa.go:55] duration metric: took 188.920373ms for default service account to be created ...
	I0729 17:52:27.699025  105708 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 17:52:27.895459  105708 request.go:629] Waited for 196.359512ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I0729 17:52:27.895551  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I0729 17:52:27.895559  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:27.895567  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:27.895571  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:27.902023  105708 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0729 17:52:27.908310  105708 system_pods.go:86] 24 kube-system pods found
	I0729 17:52:27.908337  105708 system_pods.go:89] "coredns-7db6d8ff4d-bb2jg" [ee9ad335-25b2-4e6c-a523-47b06ce713dc] Running
	I0729 17:52:27.908343  105708 system_pods.go:89] "coredns-7db6d8ff4d-nzvff" [b1e2c116-2549-4e1a-8d79-cd86595db9f3] Running
	I0729 17:52:27.908347  105708 system_pods.go:89] "etcd-ha-794405" [95b13c7f-e6bf-4225-a00b-de1fefb711ac] Running
	I0729 17:52:27.908352  105708 system_pods.go:89] "etcd-ha-794405-m02" [5c99a8df-66da-45b0-b62a-30e17113a35d] Running
	I0729 17:52:27.908356  105708 system_pods.go:89] "etcd-ha-794405-m03" [96db3933-6f55-4e09-8d3b-8e5ea049e182] Running
	I0729 17:52:27.908360  105708 system_pods.go:89] "kindnet-8qgq5" [0b2b707a-4283-41cd-9f3b-83b6f2d169cf] Running
	I0729 17:52:27.908364  105708 system_pods.go:89] "kindnet-g2qqp" [c4a0c764-368c-4059-be5b-ff49aa48f5af] Running
	I0729 17:52:27.908368  105708 system_pods.go:89] "kindnet-j4l89" [c0b81d74-531b-4878-84ea-654e7b57f0ba] Running
	I0729 17:52:27.908372  105708 system_pods.go:89] "kube-apiserver-ha-794405" [32e07436-25ae-4f51-b0e6-004ed954d8b7] Running
	I0729 17:52:27.908377  105708 system_pods.go:89] "kube-apiserver-ha-794405-m02" [0e7afc5b-62cf-4874-ade9-1df7a6c3926d] Running
	I0729 17:52:27.908381  105708 system_pods.go:89] "kube-apiserver-ha-794405-m03" [f4e70efe-e9bb-4157-9bdc-c69c621a4a9f] Running
	I0729 17:52:27.908386  105708 system_pods.go:89] "kube-controller-manager-ha-794405" [c2d30a4e-d2bc-4221-b8d9-98b67932e4ab] Running
	I0729 17:52:27.908390  105708 system_pods.go:89] "kube-controller-manager-ha-794405-m02" [88e040cd-4d7f-4a2e-97cb-4f24249d1c82] Running
	I0729 17:52:27.908394  105708 system_pods.go:89] "kube-controller-manager-ha-794405-m03" [bc163b01-3b26-4102-99c7-57070c064741] Running
	I0729 17:52:27.908398  105708 system_pods.go:89] "kube-proxy-llkz8" [95536eff-3f12-4a7e-9504-c8f6b1acc4cb] Running
	I0729 17:52:27.908402  105708 system_pods.go:89] "kube-proxy-ndmlm" [e49d3ffa-561a-4fee-9438-79bd64eaa77e] Running
	I0729 17:52:27.908409  105708 system_pods.go:89] "kube-proxy-qcmxl" [963fc8f8-3080-4602-9437-d2060c7ea622] Running
	I0729 17:52:27.908413  105708 system_pods.go:89] "kube-scheduler-ha-794405" [b41eb8ec-bf30-4cc6-b454-28305ccf70b5] Running
	I0729 17:52:27.908416  105708 system_pods.go:89] "kube-scheduler-ha-794405-m02" [8a0bfe0d-b80a-4799-8371-84041938bf1d] Running
	I0729 17:52:27.908420  105708 system_pods.go:89] "kube-scheduler-ha-794405-m03" [a04e274d-fa85-48c1-b346-5abc439b1caa] Running
	I0729 17:52:27.908424  105708 system_pods.go:89] "kube-vip-ha-794405" [0e782ab8-0d52-4894-b003-493294ab4710] Running
	I0729 17:52:27.908427  105708 system_pods.go:89] "kube-vip-ha-794405-m02" [f7e2f40a-28ab-4655-a82c-11df8cf806d5] Running
	I0729 17:52:27.908430  105708 system_pods.go:89] "kube-vip-ha-794405-m03" [c6cf8681-5029-4139-b6f5-9c72e1a186a7] Running
	I0729 17:52:27.908434  105708 system_pods.go:89] "storage-provisioner" [0e08d093-f8b5-4614-9be2-5832f7cafa75] Running
	I0729 17:52:27.908440  105708 system_pods.go:126] duration metric: took 209.410233ms to wait for k8s-apps to be running ...
	I0729 17:52:27.908451  105708 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 17:52:27.908496  105708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:52:27.924491  105708 system_svc.go:56] duration metric: took 16.032013ms WaitForService to wait for kubelet
	I0729 17:52:27.924520  105708 kubeadm.go:582] duration metric: took 21.240258453s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 17:52:27.924538  105708 node_conditions.go:102] verifying NodePressure condition ...
	I0729 17:52:28.095243  105708 request.go:629] Waited for 170.622148ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes
	I0729 17:52:28.095344  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes
	I0729 17:52:28.095362  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:28.095373  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:28.095383  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:28.098922  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:28.100208  105708 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 17:52:28.100233  105708 node_conditions.go:123] node cpu capacity is 2
	I0729 17:52:28.100244  105708 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 17:52:28.100248  105708 node_conditions.go:123] node cpu capacity is 2
	I0729 17:52:28.100251  105708 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 17:52:28.100254  105708 node_conditions.go:123] node cpu capacity is 2
	I0729 17:52:28.100258  105708 node_conditions.go:105] duration metric: took 175.716329ms to run NodePressure ...
	I0729 17:52:28.100269  105708 start.go:241] waiting for startup goroutines ...
	I0729 17:52:28.100289  105708 start.go:255] writing updated cluster config ...
	I0729 17:52:28.100595  105708 ssh_runner.go:195] Run: rm -f paused
	I0729 17:52:28.154674  105708 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 17:52:28.156740  105708 out.go:177] * Done! kubectl is now configured to use "ha-794405" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 29 17:56:03 ha-794405 crio[683]: time="2024-07-29 17:56:03.405463623Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=78b85a95-2d47-433e-8b43-a169c1b556ac name=/runtime.v1.RuntimeService/Version
	Jul 29 17:56:03 ha-794405 crio[683]: time="2024-07-29 17:56:03.406988326Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ff6307c1-8d40-4f8e-afab-19a9e5696bb3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:56:03 ha-794405 crio[683]: time="2024-07-29 17:56:03.407562305Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722275763407540513,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ff6307c1-8d40-4f8e-afab-19a9e5696bb3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:56:03 ha-794405 crio[683]: time="2024-07-29 17:56:03.408138197Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5155c801-31e3-4bb9-b9ff-7284825bd717 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:56:03 ha-794405 crio[683]: time="2024-07-29 17:56:03.408206180Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5155c801-31e3-4bb9-b9ff-7284825bd717 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:56:03 ha-794405 crio[683]: time="2024-07-29 17:56:03.408503136Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:882dc7ddd36ca559e59fbe1b16606ac3c40a7268770e2f675775826c1aa17280,PodSandboxId:030fd183fc5d70136c09a8b7e0e2865f44dd2a213638647e35bdbda279b830ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722275551524991054,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9t4xg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ceb96a8b-de79-4d8b-a767-8e61b163b088,},Annotations:map[string]string{io.kubernetes.container.hash: 341563b5,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cd9159e204634d6ac3d51e419d539364b61d6e875ff4a12637862c389aadc97,PodSandboxId:240fbb16ebb18d19e584558e2991c6aa9018969c0c29e9cd1972b1748d356979,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722275416635155029,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e08d093-f8b5-4614-9be2-5832f7cafa75,},Annotations:map[string]string{io.kubernetes.container.hash: c0dcef9d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34646ba311f51478411d1b66560efa7481f439c4be5e7764de99aa5dc1d517ca,PodSandboxId:0a85f31b7216e5e441bcfe38479d4617dde44adc1b035dce9e95d895dee48f2f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722275416642459474,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nzvff,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1e2c116-2549-4e1a-8d79-cd86595db9f3,},Annotations:map[string]string{io.kubernetes.container.hash: ea8e4842,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11e098645d7d850c038085ef2398844bd0fc149ede4fc04827afc44a6ff0c20d,PodSandboxId:c21b66fe5a20a39a06ae0c7c25ee683e7eba20a1e456cb12273df31e8c1144bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722275416592553577,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bb2jg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee9ad335-25
b2-4e6c-a523-47b06ce713dc,},Annotations:map[string]string{io.kubernetes.container.hash: 29990db6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5005f4869048ef0f06e4eb1b5d7da9e4d2f016398c43e253b17b0445643472b5,PodSandboxId:a04c14b520cac4fb30d6126c231007d3c29694f71329096e36e4531123b8d5f7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722275404641520344,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j4l89,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0b81d74-531b-4878-84ea-654e7b57f0ba,},Annotations:map[string]string{io.kubernetes.container.hash: a8506c0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2992a8242c5e7814d064049bad66f003aaa89848eaa74422ed7c90776fdf849f,PodSandboxId:afea598394fc694045c2dd49ea65df7a7559d4cad31d50a2ddcf34b62d3e506f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172227540
1434014276,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-llkz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95536eff-3f12-4a7e-9504-c8f6b1acc4cb,},Annotations:map[string]string{io.kubernetes.container.hash: 6b14763f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83c7e5300596ed794752e31ceb8cb03e339b3ea52305f02c83e577378000130f,PodSandboxId:7510b2d9ade47876437c644891b0e0b0f683f3900c72a8708be8436cad9710de,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17222753829
69924866,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d374c3f980522c4e4148a3ee91a62ea,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:152a9fa24ee44b5e9f21db72728d07a6432ed62f3a5c2c05ca7c1cd6de36609a,PodSandboxId:65aead88c888b23fc47036ea29d01bdcd48fa0b7b69073b480b5ea4ab84eef2a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722275381102711071,Labels:map[stri
ng]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e530a81d1e9a9e39055d59309a089fd,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:985c673864e1a2dafce96d68ada1dba868c8e47de790d46e403469f1abd8bd8e,PodSandboxId:2d88c70ad1fa5ef38fb6bebafbf2c938058f1611d73b528edfcef167ecc0db56,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722275381088832898,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5110118fe5cf51b6a61d9f9785be3c3c,},Annotations:map[string]string{io.kubernetes.container.hash: caa197,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fca3429715988ae09489d3d0f512d28babc61b2e2b8a3324612fba1c47839f25,PodSandboxId:da888d4d893d610e002957f561d1f310068a089f638fa6e2f658d571ef154999,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722275381004903660,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c874b85b6752de4391e8b92749861ca9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e224997d35927a2245c0946f16313307ad55fb6c004535e745af98f59921405e,PodSandboxId:a93bf9947672ad13675c5da4da58a497db7376bebf175bf0121b9f445e340e54,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722275380962157338,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernete
s.pod.name: etcd-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3d262369b7075ef1593bfc8c891dbcd,},Annotations:map[string]string{io.kubernetes.container.hash: 7df42e99,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5155c801-31e3-4bb9-b9ff-7284825bd717 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:56:03 ha-794405 crio[683]: time="2024-07-29 17:56:03.455426057Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c6302457-0301-415e-8869-31df67fefd99 name=/runtime.v1.RuntimeService/Version
	Jul 29 17:56:03 ha-794405 crio[683]: time="2024-07-29 17:56:03.455512547Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c6302457-0301-415e-8869-31df67fefd99 name=/runtime.v1.RuntimeService/Version
	Jul 29 17:56:03 ha-794405 crio[683]: time="2024-07-29 17:56:03.456804326Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b6c730d7-3f37-4edd-81de-defa6f83385b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:56:03 ha-794405 crio[683]: time="2024-07-29 17:56:03.457296830Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722275763457272937,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b6c730d7-3f37-4edd-81de-defa6f83385b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:56:03 ha-794405 crio[683]: time="2024-07-29 17:56:03.458251805Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ae9789b4-e972-475f-b033-22104139841a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:56:03 ha-794405 crio[683]: time="2024-07-29 17:56:03.458305208Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ae9789b4-e972-475f-b033-22104139841a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:56:03 ha-794405 crio[683]: time="2024-07-29 17:56:03.458653507Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:882dc7ddd36ca559e59fbe1b16606ac3c40a7268770e2f675775826c1aa17280,PodSandboxId:030fd183fc5d70136c09a8b7e0e2865f44dd2a213638647e35bdbda279b830ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722275551524991054,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9t4xg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ceb96a8b-de79-4d8b-a767-8e61b163b088,},Annotations:map[string]string{io.kubernetes.container.hash: 341563b5,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cd9159e204634d6ac3d51e419d539364b61d6e875ff4a12637862c389aadc97,PodSandboxId:240fbb16ebb18d19e584558e2991c6aa9018969c0c29e9cd1972b1748d356979,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722275416635155029,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e08d093-f8b5-4614-9be2-5832f7cafa75,},Annotations:map[string]string{io.kubernetes.container.hash: c0dcef9d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34646ba311f51478411d1b66560efa7481f439c4be5e7764de99aa5dc1d517ca,PodSandboxId:0a85f31b7216e5e441bcfe38479d4617dde44adc1b035dce9e95d895dee48f2f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722275416642459474,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nzvff,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1e2c116-2549-4e1a-8d79-cd86595db9f3,},Annotations:map[string]string{io.kubernetes.container.hash: ea8e4842,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11e098645d7d850c038085ef2398844bd0fc149ede4fc04827afc44a6ff0c20d,PodSandboxId:c21b66fe5a20a39a06ae0c7c25ee683e7eba20a1e456cb12273df31e8c1144bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722275416592553577,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bb2jg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee9ad335-25
b2-4e6c-a523-47b06ce713dc,},Annotations:map[string]string{io.kubernetes.container.hash: 29990db6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5005f4869048ef0f06e4eb1b5d7da9e4d2f016398c43e253b17b0445643472b5,PodSandboxId:a04c14b520cac4fb30d6126c231007d3c29694f71329096e36e4531123b8d5f7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722275404641520344,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j4l89,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0b81d74-531b-4878-84ea-654e7b57f0ba,},Annotations:map[string]string{io.kubernetes.container.hash: a8506c0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2992a8242c5e7814d064049bad66f003aaa89848eaa74422ed7c90776fdf849f,PodSandboxId:afea598394fc694045c2dd49ea65df7a7559d4cad31d50a2ddcf34b62d3e506f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172227540
1434014276,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-llkz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95536eff-3f12-4a7e-9504-c8f6b1acc4cb,},Annotations:map[string]string{io.kubernetes.container.hash: 6b14763f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83c7e5300596ed794752e31ceb8cb03e339b3ea52305f02c83e577378000130f,PodSandboxId:7510b2d9ade47876437c644891b0e0b0f683f3900c72a8708be8436cad9710de,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17222753829
69924866,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d374c3f980522c4e4148a3ee91a62ea,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:152a9fa24ee44b5e9f21db72728d07a6432ed62f3a5c2c05ca7c1cd6de36609a,PodSandboxId:65aead88c888b23fc47036ea29d01bdcd48fa0b7b69073b480b5ea4ab84eef2a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722275381102711071,Labels:map[stri
ng]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e530a81d1e9a9e39055d59309a089fd,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:985c673864e1a2dafce96d68ada1dba868c8e47de790d46e403469f1abd8bd8e,PodSandboxId:2d88c70ad1fa5ef38fb6bebafbf2c938058f1611d73b528edfcef167ecc0db56,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722275381088832898,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5110118fe5cf51b6a61d9f9785be3c3c,},Annotations:map[string]string{io.kubernetes.container.hash: caa197,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fca3429715988ae09489d3d0f512d28babc61b2e2b8a3324612fba1c47839f25,PodSandboxId:da888d4d893d610e002957f561d1f310068a089f638fa6e2f658d571ef154999,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722275381004903660,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c874b85b6752de4391e8b92749861ca9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e224997d35927a2245c0946f16313307ad55fb6c004535e745af98f59921405e,PodSandboxId:a93bf9947672ad13675c5da4da58a497db7376bebf175bf0121b9f445e340e54,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722275380962157338,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernete
s.pod.name: etcd-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3d262369b7075ef1593bfc8c891dbcd,},Annotations:map[string]string{io.kubernetes.container.hash: 7df42e99,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ae9789b4-e972-475f-b033-22104139841a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:56:03 ha-794405 crio[683]: time="2024-07-29 17:56:03.490676965Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=390f7dbf-0154-4f97-8e59-5bd6dc2e6bae name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 29 17:56:03 ha-794405 crio[683]: time="2024-07-29 17:56:03.490974615Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:030fd183fc5d70136c09a8b7e0e2865f44dd2a213638647e35bdbda279b830ee,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-9t4xg,Uid:ceb96a8b-de79-4d8b-a767-8e61b163b088,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722275550477868693,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-9t4xg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ceb96a8b-de79-4d8b-a767-8e61b163b088,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T17:52:30.166682278Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c21b66fe5a20a39a06ae0c7c25ee683e7eba20a1e456cb12273df31e8c1144bc,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-bb2jg,Uid:ee9ad335-25b2-4e6c-a523-47b06ce713dc,Namespace:kube-system,Attempt:0,},State:S
ANDBOX_READY,CreatedAt:1722275416399914647,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-bb2jg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee9ad335-25b2-4e6c-a523-47b06ce713dc,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T17:50:16.085962318Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0a85f31b7216e5e441bcfe38479d4617dde44adc1b035dce9e95d895dee48f2f,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-nzvff,Uid:b1e2c116-2549-4e1a-8d79-cd86595db9f3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722275416391227744,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-nzvff,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1e2c116-2549-4e1a-8d79-cd86595db9f3,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2
024-07-29T17:50:16.075146896Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:240fbb16ebb18d19e584558e2991c6aa9018969c0c29e9cd1972b1748d356979,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:0e08d093-f8b5-4614-9be2-5832f7cafa75,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722275416388566748,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e08d093-f8b5-4614-9be2-5832f7cafa75,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"im
age\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-29T17:50:16.083259304Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a04c14b520cac4fb30d6126c231007d3c29694f71329096e36e4531123b8d5f7,Metadata:&PodSandboxMetadata{Name:kindnet-j4l89,Uid:c0b81d74-531b-4878-84ea-654e7b57f0ba,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722275401125572832,Labels:map[string]string{app: kindnet,controller-revision-hash: 549967b474,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-j4l89,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0b81d74-531b-4878-84ea-654e7b57f0ba,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotati
ons:map[string]string{kubernetes.io/config.seen: 2024-07-29T17:50:00.807579408Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:afea598394fc694045c2dd49ea65df7a7559d4cad31d50a2ddcf34b62d3e506f,Metadata:&PodSandboxMetadata{Name:kube-proxy-llkz8,Uid:95536eff-3f12-4a7e-9504-c8f6b1acc4cb,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722275401120831741,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-llkz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95536eff-3f12-4a7e-9504-c8f6b1acc4cb,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T17:50:00.803133866Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2d88c70ad1fa5ef38fb6bebafbf2c938058f1611d73b528edfcef167ecc0db56,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-794405,Uid:5110118fe5cf51b6a61d9f9785be3c3c,Namespace:kube-system,Attempt:0
,},State:SANDBOX_READY,CreatedAt:1722275380801046586,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5110118fe5cf51b6a61d9f9785be3c3c,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.102:8443,kubernetes.io/config.hash: 5110118fe5cf51b6a61d9f9785be3c3c,kubernetes.io/config.seen: 2024-07-29T17:49:40.316840571Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7510b2d9ade47876437c644891b0e0b0f683f3900c72a8708be8436cad9710de,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-794405,Uid:7d374c3f980522c4e4148a3ee91a62ea,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722275380795567768,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d374c3f980
522c4e4148a3ee91a62ea,},Annotations:map[string]string{kubernetes.io/config.hash: 7d374c3f980522c4e4148a3ee91a62ea,kubernetes.io/config.seen: 2024-07-29T17:49:40.316844348Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:da888d4d893d610e002957f561d1f310068a089f638fa6e2f658d571ef154999,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-794405,Uid:c874b85b6752de4391e8b92749861ca9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722275380786101504,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c874b85b6752de4391e8b92749861ca9,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c874b85b6752de4391e8b92749861ca9,kubernetes.io/config.seen: 2024-07-29T17:49:40.316843192Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:65aead88c888b23fc47036ea29d01bdcd48fa0b7b69073b480b5ea4ab84eef2a,Met
adata:&PodSandboxMetadata{Name:kube-controller-manager-ha-794405,Uid:6e530a81d1e9a9e39055d59309a089fd,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722275380782097448,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e530a81d1e9a9e39055d59309a089fd,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 6e530a81d1e9a9e39055d59309a089fd,kubernetes.io/config.seen: 2024-07-29T17:49:40.316841913Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a93bf9947672ad13675c5da4da58a497db7376bebf175bf0121b9f445e340e54,Metadata:&PodSandboxMetadata{Name:etcd-ha-794405,Uid:d3d262369b7075ef1593bfc8c891dbcd,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722275380775278570,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-794405,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3d262369b7075ef1593bfc8c891dbcd,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.102:2379,kubernetes.io/config.hash: d3d262369b7075ef1593bfc8c891dbcd,kubernetes.io/config.seen: 2024-07-29T17:49:40.316836726Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=390f7dbf-0154-4f97-8e59-5bd6dc2e6bae name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 29 17:56:03 ha-794405 crio[683]: time="2024-07-29 17:56:03.491746863Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dba28de9-058c-402a-869b-30619ee31c6a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:56:03 ha-794405 crio[683]: time="2024-07-29 17:56:03.491819874Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dba28de9-058c-402a-869b-30619ee31c6a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:56:03 ha-794405 crio[683]: time="2024-07-29 17:56:03.492053347Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:882dc7ddd36ca559e59fbe1b16606ac3c40a7268770e2f675775826c1aa17280,PodSandboxId:030fd183fc5d70136c09a8b7e0e2865f44dd2a213638647e35bdbda279b830ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722275551524991054,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9t4xg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ceb96a8b-de79-4d8b-a767-8e61b163b088,},Annotations:map[string]string{io.kubernetes.container.hash: 341563b5,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cd9159e204634d6ac3d51e419d539364b61d6e875ff4a12637862c389aadc97,PodSandboxId:240fbb16ebb18d19e584558e2991c6aa9018969c0c29e9cd1972b1748d356979,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722275416635155029,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e08d093-f8b5-4614-9be2-5832f7cafa75,},Annotations:map[string]string{io.kubernetes.container.hash: c0dcef9d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34646ba311f51478411d1b66560efa7481f439c4be5e7764de99aa5dc1d517ca,PodSandboxId:0a85f31b7216e5e441bcfe38479d4617dde44adc1b035dce9e95d895dee48f2f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722275416642459474,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nzvff,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1e2c116-2549-4e1a-8d79-cd86595db9f3,},Annotations:map[string]string{io.kubernetes.container.hash: ea8e4842,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11e098645d7d850c038085ef2398844bd0fc149ede4fc04827afc44a6ff0c20d,PodSandboxId:c21b66fe5a20a39a06ae0c7c25ee683e7eba20a1e456cb12273df31e8c1144bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722275416592553577,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bb2jg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee9ad335-25
b2-4e6c-a523-47b06ce713dc,},Annotations:map[string]string{io.kubernetes.container.hash: 29990db6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5005f4869048ef0f06e4eb1b5d7da9e4d2f016398c43e253b17b0445643472b5,PodSandboxId:a04c14b520cac4fb30d6126c231007d3c29694f71329096e36e4531123b8d5f7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722275404641520344,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j4l89,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0b81d74-531b-4878-84ea-654e7b57f0ba,},Annotations:map[string]string{io.kubernetes.container.hash: a8506c0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2992a8242c5e7814d064049bad66f003aaa89848eaa74422ed7c90776fdf849f,PodSandboxId:afea598394fc694045c2dd49ea65df7a7559d4cad31d50a2ddcf34b62d3e506f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172227540
1434014276,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-llkz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95536eff-3f12-4a7e-9504-c8f6b1acc4cb,},Annotations:map[string]string{io.kubernetes.container.hash: 6b14763f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83c7e5300596ed794752e31ceb8cb03e339b3ea52305f02c83e577378000130f,PodSandboxId:7510b2d9ade47876437c644891b0e0b0f683f3900c72a8708be8436cad9710de,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17222753829
69924866,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d374c3f980522c4e4148a3ee91a62ea,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:152a9fa24ee44b5e9f21db72728d07a6432ed62f3a5c2c05ca7c1cd6de36609a,PodSandboxId:65aead88c888b23fc47036ea29d01bdcd48fa0b7b69073b480b5ea4ab84eef2a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722275381102711071,Labels:map[stri
ng]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e530a81d1e9a9e39055d59309a089fd,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:985c673864e1a2dafce96d68ada1dba868c8e47de790d46e403469f1abd8bd8e,PodSandboxId:2d88c70ad1fa5ef38fb6bebafbf2c938058f1611d73b528edfcef167ecc0db56,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722275381088832898,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5110118fe5cf51b6a61d9f9785be3c3c,},Annotations:map[string]string{io.kubernetes.container.hash: caa197,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fca3429715988ae09489d3d0f512d28babc61b2e2b8a3324612fba1c47839f25,PodSandboxId:da888d4d893d610e002957f561d1f310068a089f638fa6e2f658d571ef154999,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722275381004903660,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c874b85b6752de4391e8b92749861ca9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e224997d35927a2245c0946f16313307ad55fb6c004535e745af98f59921405e,PodSandboxId:a93bf9947672ad13675c5da4da58a497db7376bebf175bf0121b9f445e340e54,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722275380962157338,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernete
s.pod.name: etcd-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3d262369b7075ef1593bfc8c891dbcd,},Annotations:map[string]string{io.kubernetes.container.hash: 7df42e99,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dba28de9-058c-402a-869b-30619ee31c6a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:56:03 ha-794405 crio[683]: time="2024-07-29 17:56:03.503328745Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=95752d2e-0618-4792-bcd7-fe73a9f80e1d name=/runtime.v1.RuntimeService/Version
	Jul 29 17:56:03 ha-794405 crio[683]: time="2024-07-29 17:56:03.503438436Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=95752d2e-0618-4792-bcd7-fe73a9f80e1d name=/runtime.v1.RuntimeService/Version
	Jul 29 17:56:03 ha-794405 crio[683]: time="2024-07-29 17:56:03.504288993Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=61c73e12-f1e1-450f-9016-b61356a2b5ce name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:56:03 ha-794405 crio[683]: time="2024-07-29 17:56:03.504825780Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722275763504804809,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=61c73e12-f1e1-450f-9016-b61356a2b5ce name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:56:03 ha-794405 crio[683]: time="2024-07-29 17:56:03.505331983Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=19cbe5ba-ec5a-4649-818d-7ebbc9b5cea9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:56:03 ha-794405 crio[683]: time="2024-07-29 17:56:03.505437982Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=19cbe5ba-ec5a-4649-818d-7ebbc9b5cea9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:56:03 ha-794405 crio[683]: time="2024-07-29 17:56:03.505666555Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:882dc7ddd36ca559e59fbe1b16606ac3c40a7268770e2f675775826c1aa17280,PodSandboxId:030fd183fc5d70136c09a8b7e0e2865f44dd2a213638647e35bdbda279b830ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722275551524991054,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9t4xg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ceb96a8b-de79-4d8b-a767-8e61b163b088,},Annotations:map[string]string{io.kubernetes.container.hash: 341563b5,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cd9159e204634d6ac3d51e419d539364b61d6e875ff4a12637862c389aadc97,PodSandboxId:240fbb16ebb18d19e584558e2991c6aa9018969c0c29e9cd1972b1748d356979,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722275416635155029,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e08d093-f8b5-4614-9be2-5832f7cafa75,},Annotations:map[string]string{io.kubernetes.container.hash: c0dcef9d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34646ba311f51478411d1b66560efa7481f439c4be5e7764de99aa5dc1d517ca,PodSandboxId:0a85f31b7216e5e441bcfe38479d4617dde44adc1b035dce9e95d895dee48f2f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722275416642459474,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nzvff,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1e2c116-2549-4e1a-8d79-cd86595db9f3,},Annotations:map[string]string{io.kubernetes.container.hash: ea8e4842,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11e098645d7d850c038085ef2398844bd0fc149ede4fc04827afc44a6ff0c20d,PodSandboxId:c21b66fe5a20a39a06ae0c7c25ee683e7eba20a1e456cb12273df31e8c1144bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722275416592553577,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bb2jg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee9ad335-25
b2-4e6c-a523-47b06ce713dc,},Annotations:map[string]string{io.kubernetes.container.hash: 29990db6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5005f4869048ef0f06e4eb1b5d7da9e4d2f016398c43e253b17b0445643472b5,PodSandboxId:a04c14b520cac4fb30d6126c231007d3c29694f71329096e36e4531123b8d5f7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722275404641520344,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j4l89,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0b81d74-531b-4878-84ea-654e7b57f0ba,},Annotations:map[string]string{io.kubernetes.container.hash: a8506c0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2992a8242c5e7814d064049bad66f003aaa89848eaa74422ed7c90776fdf849f,PodSandboxId:afea598394fc694045c2dd49ea65df7a7559d4cad31d50a2ddcf34b62d3e506f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172227540
1434014276,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-llkz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95536eff-3f12-4a7e-9504-c8f6b1acc4cb,},Annotations:map[string]string{io.kubernetes.container.hash: 6b14763f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83c7e5300596ed794752e31ceb8cb03e339b3ea52305f02c83e577378000130f,PodSandboxId:7510b2d9ade47876437c644891b0e0b0f683f3900c72a8708be8436cad9710de,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17222753829
69924866,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d374c3f980522c4e4148a3ee91a62ea,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:152a9fa24ee44b5e9f21db72728d07a6432ed62f3a5c2c05ca7c1cd6de36609a,PodSandboxId:65aead88c888b23fc47036ea29d01bdcd48fa0b7b69073b480b5ea4ab84eef2a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722275381102711071,Labels:map[stri
ng]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e530a81d1e9a9e39055d59309a089fd,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:985c673864e1a2dafce96d68ada1dba868c8e47de790d46e403469f1abd8bd8e,PodSandboxId:2d88c70ad1fa5ef38fb6bebafbf2c938058f1611d73b528edfcef167ecc0db56,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722275381088832898,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5110118fe5cf51b6a61d9f9785be3c3c,},Annotations:map[string]string{io.kubernetes.container.hash: caa197,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fca3429715988ae09489d3d0f512d28babc61b2e2b8a3324612fba1c47839f25,PodSandboxId:da888d4d893d610e002957f561d1f310068a089f638fa6e2f658d571ef154999,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722275381004903660,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c874b85b6752de4391e8b92749861ca9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e224997d35927a2245c0946f16313307ad55fb6c004535e745af98f59921405e,PodSandboxId:a93bf9947672ad13675c5da4da58a497db7376bebf175bf0121b9f445e340e54,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722275380962157338,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernete
s.pod.name: etcd-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3d262369b7075ef1593bfc8c891dbcd,},Annotations:map[string]string{io.kubernetes.container.hash: 7df42e99,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=19cbe5ba-ec5a-4649-818d-7ebbc9b5cea9 name=/runtime.v1.RuntimeService/ListContainers
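	Each crio debug entry above is one CRI gRPC request/response pair against CRI-O (RuntimeService/Version, ImageService/ImageFsInfo, RuntimeService/ListPodSandbox, RuntimeService/ListContainers), and the full ListContainersResponse payload is logged for every call, which is why the same container list repeats. Below is a minimal sketch (not part of the test harness) of issuing the same Version and ListContainers calls through the CRI gRPC API; the socket path and the running-state filter are assumptions for illustration only.

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// CRI-O's CRI endpoint on the node (assumed default path).
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatalf("dial CRI endpoint: %v", err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Mirrors the VersionRequest/VersionResponse pair in the log
		// (RuntimeName: cri-o, RuntimeVersion: 1.29.1).
		ver, err := client.Version(ctx, &runtimeapi.VersionRequest{})
		if err != nil {
			log.Fatalf("Version: %v", err)
		}
		fmt.Printf("%s %s (CRI %s)\n", ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)

		// Mirrors ListContainersRequest with a State:CONTAINER_RUNNING filter.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
			Filter: &runtimeapi.ContainerFilter{
				State: &runtimeapi.ContainerStateValue{
					State: runtimeapi.ContainerState_CONTAINER_RUNNING,
				},
			},
		})
		if err != nil {
			log.Fatalf("ListContainers: %v", err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%-13.13s %-25s %s\n",
				c.Id, c.Metadata.Name, c.Labels["io.kubernetes.pod.name"])
		}
	}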
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	882dc7ddd36ca       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   030fd183fc5d7       busybox-fc5497c4f-9t4xg
	34646ba311f51       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   0a85f31b7216e       coredns-7db6d8ff4d-nzvff
	9cd9159e20463       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   240fbb16ebb18       storage-provisioner
	11e098645d7d8       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   c21b66fe5a20a       coredns-7db6d8ff4d-bb2jg
	5005f4869048e       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    5 minutes ago       Running             kindnet-cni               0                   a04c14b520cac       kindnet-j4l89
	2992a8242c5e7       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      6 minutes ago       Running             kube-proxy                0                   afea598394fc6       kube-proxy-llkz8
	83c7e5300596e       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   7510b2d9ade47       kube-vip-ha-794405
	152a9fa24ee44       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      6 minutes ago       Running             kube-controller-manager   0                   65aead88c888b       kube-controller-manager-ha-794405
	985c673864e1a       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      6 minutes ago       Running             kube-apiserver            0                   2d88c70ad1fa5       kube-apiserver-ha-794405
	fca3429715988       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      6 minutes ago       Running             kube-scheduler            0                   da888d4d893d6       kube-scheduler-ha-794405
	e224997d35927       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      6 minutes ago       Running             etcd                      0                   a93bf9947672a       etcd-ha-794405
	
	
	==> coredns [11e098645d7d850c038085ef2398844bd0fc149ede4fc04827afc44a6ff0c20d] <==
	[INFO] 10.244.1.2:40259 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000163059s
	[INFO] 10.244.2.2:53496 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000193518s
	[INFO] 10.244.2.2:55534 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.010566736s
	[INFO] 10.244.2.2:40585 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000234026s
	[INFO] 10.244.2.2:49780 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000172106s
	[INFO] 10.244.0.4:57455 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00010734s
	[INFO] 10.244.0.4:49757 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000134817s
	[INFO] 10.244.0.4:34537 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000083091s
	[INFO] 10.244.0.4:59243 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000094884s
	[INFO] 10.244.0.4:32813 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000194094s
	[INFO] 10.244.1.2:51380 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001717695s
	[INFO] 10.244.1.2:41977 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000084863s
	[INFO] 10.244.1.2:45990 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000090641s
	[INFO] 10.244.1.2:55905 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000128239s
	[INFO] 10.244.1.2:57839 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000092047s
	[INFO] 10.244.0.4:52553 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000155036s
	[INFO] 10.244.0.4:60833 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000116165s
	[INFO] 10.244.0.4:58984 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000096169s
	[INFO] 10.244.1.2:56581 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000099926s
	[INFO] 10.244.2.2:47299 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000251364s
	[INFO] 10.244.2.2:54140 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000131767s
	[INFO] 10.244.0.4:37906 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128168s
	[INFO] 10.244.0.4:53897 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000128545s
	[INFO] 10.244.0.4:42232 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000175859s
	[INFO] 10.244.1.2:58375 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000225865s
	
	
	==> coredns [34646ba311f51478411d1b66560efa7481f439c4be5e7764de99aa5dc1d517ca] <==
	[INFO] 10.244.1.2:54634 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001797843s
	[INFO] 10.244.2.2:47599 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000241341s
	[INFO] 10.244.2.2:54826 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003913926s
	[INFO] 10.244.2.2:38410 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000162546s
	[INFO] 10.244.2.2:58834 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000129002s
	[INFO] 10.244.0.4:49557 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000090727s
	[INFO] 10.244.0.4:33820 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001835803s
	[INFO] 10.244.0.4:39762 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001456019s
	[INFO] 10.244.1.2:49407 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00010484s
	[INFO] 10.244.1.2:41901 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000153055s
	[INFO] 10.244.1.2:46891 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001271955s
	[INFO] 10.244.2.2:49560 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000127808s
	[INFO] 10.244.2.2:56119 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00007809s
	[INFO] 10.244.2.2:38291 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0002272s
	[INFO] 10.244.2.2:47373 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000074396s
	[INFO] 10.244.0.4:48660 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000051359s
	[INFO] 10.244.1.2:45618 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00016309s
	[INFO] 10.244.1.2:34022 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000090959s
	[INFO] 10.244.1.2:55925 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000187604s
	[INFO] 10.244.2.2:52948 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132206s
	[INFO] 10.244.2.2:50512 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000133066s
	[INFO] 10.244.0.4:56090 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00011653s
	[INFO] 10.244.1.2:53420 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109055s
	[INFO] 10.244.1.2:51296 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000101897s
	[INFO] 10.244.1.2:36056 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000072778s
	
	
	==> describe nodes <==
	Name:               ha-794405
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-794405
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=de53afae5e8f4269438099f4dad14d93a8a17e35
	                    minikube.k8s.io/name=ha-794405
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T17_49_48_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 17:49:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-794405
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 17:55:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 17:52:50 +0000   Mon, 29 Jul 2024 17:49:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 17:52:50 +0000   Mon, 29 Jul 2024 17:49:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 17:52:50 +0000   Mon, 29 Jul 2024 17:49:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 17:52:50 +0000   Mon, 29 Jul 2024 17:50:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.102
	  Hostname:    ha-794405
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4f5d049fcd1645d38ff56c6e587d83f8
	  System UUID:                4f5d049f-cd16-45d3-8ff5-6c6e587d83f8
	  Boot ID:                    a36bbb12-7ddf-423d-b68c-d781a4b4af75
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-9t4xg              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 coredns-7db6d8ff4d-bb2jg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m3s
	  kube-system                 coredns-7db6d8ff4d-nzvff             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m3s
	  kube-system                 etcd-ha-794405                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m16s
	  kube-system                 kindnet-j4l89                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m3s
	  kube-system                 kube-apiserver-ha-794405             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m16s
	  kube-system                 kube-controller-manager-ha-794405    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m16s
	  kube-system                 kube-proxy-llkz8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 kube-scheduler-ha-794405             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m16s
	  kube-system                 kube-vip-ha-794405                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m16s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m1s   kube-proxy       
	  Normal  Starting                 6m16s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m16s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m16s  kubelet          Node ha-794405 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m16s  kubelet          Node ha-794405 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m16s  kubelet          Node ha-794405 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m3s   node-controller  Node ha-794405 event: Registered Node ha-794405 in Controller
	  Normal  NodeReady                5m47s  kubelet          Node ha-794405 status is now: NodeReady
	  Normal  RegisteredNode           4m55s  node-controller  Node ha-794405 event: Registered Node ha-794405 in Controller
	  Normal  RegisteredNode           3m43s  node-controller  Node ha-794405 event: Registered Node ha-794405 in Controller
	
	
	Name:               ha-794405-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-794405-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=de53afae5e8f4269438099f4dad14d93a8a17e35
	                    minikube.k8s.io/name=ha-794405
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T17_50_52_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 17:50:49 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-794405-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 17:53:43 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 29 Jul 2024 17:52:52 +0000   Mon, 29 Jul 2024 17:54:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 29 Jul 2024 17:52:52 +0000   Mon, 29 Jul 2024 17:54:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 29 Jul 2024 17:52:52 +0000   Mon, 29 Jul 2024 17:54:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 29 Jul 2024 17:52:52 +0000   Mon, 29 Jul 2024 17:54:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.62
	  Hostname:    ha-794405-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 437dda8ebd384bf294c14831928d98f5
	  System UUID:                437dda8e-bd38-4bf2-94c1-4831928d98f5
	  Boot ID:                    8dac2304-3043-4420-be7b-4720ee3f4a37
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-kq6g2                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 etcd-ha-794405-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m13s
	  kube-system                 kindnet-8qgq5                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m14s
	  kube-system                 kube-apiserver-ha-794405-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 kube-controller-manager-ha-794405-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 kube-proxy-qcmxl                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m14s
	  kube-system                 kube-scheduler-ha-794405-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 kube-vip-ha-794405-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m10s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  5m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m14s (x8 over 5m15s)  kubelet          Node ha-794405-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m14s (x8 over 5m15s)  kubelet          Node ha-794405-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m14s (x7 over 5m15s)  kubelet          Node ha-794405-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m13s                  node-controller  Node ha-794405-m02 event: Registered Node ha-794405-m02 in Controller
	  Normal  RegisteredNode           4m55s                  node-controller  Node ha-794405-m02 event: Registered Node ha-794405-m02 in Controller
	  Normal  RegisteredNode           3m43s                  node-controller  Node ha-794405-m02 event: Registered Node ha-794405-m02 in Controller
	  Normal  NodeNotReady             98s                    node-controller  Node ha-794405-m02 status is now: NodeNotReady
	
	
	Name:               ha-794405-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-794405-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=de53afae5e8f4269438099f4dad14d93a8a17e35
	                    minikube.k8s.io/name=ha-794405
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T17_52_06_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 17:52:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-794405-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 17:55:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 17:52:32 +0000   Mon, 29 Jul 2024 17:52:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 17:52:32 +0000   Mon, 29 Jul 2024 17:52:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 17:52:32 +0000   Mon, 29 Jul 2024 17:52:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 17:52:32 +0000   Mon, 29 Jul 2024 17:52:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.185
	  Hostname:    ha-794405-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7788bd32e72d421d86476277253535d2
	  System UUID:                7788bd32-e72d-421d-8647-6277253535d2
	  Boot ID:                    99ed6d55-8112-4d56-83c8-983b813fa1bc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-8xr2r                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 etcd-ha-794405-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m
	  kube-system                 kindnet-g2qqp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m2s
	  kube-system                 kube-apiserver-ha-794405-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 kube-controller-manager-ha-794405-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 kube-proxy-ndmlm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-scheduler-ha-794405-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 kube-vip-ha-794405-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m57s                kube-proxy       
	  Normal  NodeHasSufficientMemory  4m2s (x8 over 4m2s)  kubelet          Node ha-794405-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m2s (x8 over 4m2s)  kubelet          Node ha-794405-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m2s (x7 over 4m2s)  kubelet          Node ha-794405-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m                   node-controller  Node ha-794405-m03 event: Registered Node ha-794405-m03 in Controller
	  Normal  RegisteredNode           3m58s                node-controller  Node ha-794405-m03 event: Registered Node ha-794405-m03 in Controller
	  Normal  RegisteredNode           3m43s                node-controller  Node ha-794405-m03 event: Registered Node ha-794405-m03 in Controller
	
	
	Name:               ha-794405-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-794405-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=de53afae5e8f4269438099f4dad14d93a8a17e35
	                    minikube.k8s.io/name=ha-794405
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T17_53_05_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 17:53:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-794405-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 17:55:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 17:53:35 +0000   Mon, 29 Jul 2024 17:53:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 17:53:35 +0000   Mon, 29 Jul 2024 17:53:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 17:53:35 +0000   Mon, 29 Jul 2024 17:53:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 17:53:35 +0000   Mon, 29 Jul 2024 17:53:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.179
	  Hostname:    ha-794405-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2eee0b726b504b318de9dcda1a6d7202
	  System UUID:                2eee0b72-6b50-4b31-8de9-dcda1a6d7202
	  Boot ID:                    8afc220b-a697-4b20-991b-858204b503d2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-ndgvz       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m59s
	  kube-system                 kube-proxy-nrw9z    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m54s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m59s (x2 over 2m59s)  kubelet          Node ha-794405-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m59s (x2 over 2m59s)  kubelet          Node ha-794405-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m59s (x2 over 2m59s)  kubelet          Node ha-794405-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m59s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m58s                  node-controller  Node ha-794405-m04 event: Registered Node ha-794405-m04 in Controller
	  Normal  RegisteredNode           2m58s                  node-controller  Node ha-794405-m04 event: Registered Node ha-794405-m04 in Controller
	  Normal  RegisteredNode           2m55s                  node-controller  Node ha-794405-m04 event: Registered Node ha-794405-m04 in Controller
	  Normal  NodeReady                2m40s                  kubelet          Node ha-794405-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Jul29 17:49] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.049871] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040156] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.724474] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.475771] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.618211] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +12.653696] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.053781] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058152] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.186373] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.123683] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.267498] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +4.093512] systemd-fstab-generator[766]: Ignoring "noauto" option for root device
	[  +4.553872] systemd-fstab-generator[951]: Ignoring "noauto" option for root device
	[  +0.061033] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.996135] kauditd_printk_skb: 79 callbacks suppressed
	[  +0.105049] systemd-fstab-generator[1368]: Ignoring "noauto" option for root device
	[Jul29 17:50] kauditd_printk_skb: 15 callbacks suppressed
	[ +15.275633] kauditd_printk_skb: 38 callbacks suppressed
	[ +40.101588] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [e224997d35927a2245c0946f16313307ad55fb6c004535e745af98f59921405e] <==
	{"level":"warn","ts":"2024-07-29T17:56:03.751184Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"6da3c9e913621171","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:56:03.783978Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"6da3c9e913621171","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:56:03.795485Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"6da3c9e913621171","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:56:03.79946Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"6da3c9e913621171","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:56:03.810228Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"6da3c9e913621171","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:56:03.818119Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"6da3c9e913621171","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:56:03.824846Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"6da3c9e913621171","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:56:03.828451Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"6da3c9e913621171","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:56:03.831878Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"6da3c9e913621171","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:56:03.841889Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"6da3c9e913621171","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:56:03.847876Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"6da3c9e913621171","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:56:03.852473Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"6da3c9e913621171","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:56:03.854525Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"6da3c9e913621171","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:56:03.863887Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"6da3c9e913621171","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:56:03.867109Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"6da3c9e913621171","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:56:03.880535Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"6da3c9e913621171","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:56:03.887459Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"6da3c9e913621171","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:56:03.891027Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"6da3c9e913621171","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:56:03.893907Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"6da3c9e913621171","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:56:03.899114Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"6da3c9e913621171","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:56:03.90201Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.62:2380/version","remote-member-id":"6da3c9e913621171","error":"Get \"https://192.168.39.62:2380/version\": dial tcp 192.168.39.62:2380: i/o timeout"}
	{"level":"warn","ts":"2024-07-29T17:56:03.902204Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"6da3c9e913621171","error":"Get \"https://192.168.39.62:2380/version\": dial tcp 192.168.39.62:2380: i/o timeout"}
	{"level":"warn","ts":"2024-07-29T17:56:03.906663Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"6da3c9e913621171","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:56:03.914291Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"6da3c9e913621171","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:56:03.951133Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"6da3c9e913621171","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 17:56:03 up 6 min,  0 users,  load average: 0.19, 0.28, 0.17
	Linux ha-794405 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [5005f4869048ef0f06e4eb1b5d7da9e4d2f016398c43e253b17b0445643472b5] <==
	I0729 17:55:25.714754       1 main.go:322] Node ha-794405-m04 has CIDR [10.244.3.0/24] 
	I0729 17:55:35.714461       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0729 17:55:35.714578       1 main.go:299] handling current node
	I0729 17:55:35.714607       1 main.go:295] Handling node with IPs: map[192.168.39.62:{}]
	I0729 17:55:35.714628       1 main.go:322] Node ha-794405-m02 has CIDR [10.244.1.0/24] 
	I0729 17:55:35.714774       1 main.go:295] Handling node with IPs: map[192.168.39.185:{}]
	I0729 17:55:35.714805       1 main.go:322] Node ha-794405-m03 has CIDR [10.244.2.0/24] 
	I0729 17:55:35.714945       1 main.go:295] Handling node with IPs: map[192.168.39.179:{}]
	I0729 17:55:35.714988       1 main.go:322] Node ha-794405-m04 has CIDR [10.244.3.0/24] 
	I0729 17:55:45.708235       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0729 17:55:45.708322       1 main.go:299] handling current node
	I0729 17:55:45.708349       1 main.go:295] Handling node with IPs: map[192.168.39.62:{}]
	I0729 17:55:45.708439       1 main.go:322] Node ha-794405-m02 has CIDR [10.244.1.0/24] 
	I0729 17:55:45.708578       1 main.go:295] Handling node with IPs: map[192.168.39.185:{}]
	I0729 17:55:45.708600       1 main.go:322] Node ha-794405-m03 has CIDR [10.244.2.0/24] 
	I0729 17:55:45.708664       1 main.go:295] Handling node with IPs: map[192.168.39.179:{}]
	I0729 17:55:45.708682       1 main.go:322] Node ha-794405-m04 has CIDR [10.244.3.0/24] 
	I0729 17:55:55.705633       1 main.go:295] Handling node with IPs: map[192.168.39.179:{}]
	I0729 17:55:55.705697       1 main.go:322] Node ha-794405-m04 has CIDR [10.244.3.0/24] 
	I0729 17:55:55.705883       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0729 17:55:55.705909       1 main.go:299] handling current node
	I0729 17:55:55.705929       1 main.go:295] Handling node with IPs: map[192.168.39.62:{}]
	I0729 17:55:55.705934       1 main.go:322] Node ha-794405-m02 has CIDR [10.244.1.0/24] 
	I0729 17:55:55.706058       1 main.go:295] Handling node with IPs: map[192.168.39.185:{}]
	I0729 17:55:55.706065       1 main.go:322] Node ha-794405-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [985c673864e1a2dafce96d68ada1dba868c8e47de790d46e403469f1abd8bd8e] <==
	I0729 17:49:47.297566       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0729 17:49:47.314065       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0729 17:49:47.463932       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 17:50:00.559990       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0729 17:50:00.764278       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0729 17:52:02.764919       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0729 17:52:02.765219       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0729 17:52:02.765665       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 398.449µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0729 17:52:02.766888       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0729 17:52:02.767054       1 timeout.go:142] post-timeout activity - time-elapsed: 2.784713ms, POST "/api/v1/namespaces/kube-system/events" result: <nil>
	E0729 17:52:32.629890       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43196: use of closed network connection
	E0729 17:52:32.828114       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43206: use of closed network connection
	E0729 17:52:33.022775       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43220: use of closed network connection
	E0729 17:52:33.208138       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43250: use of closed network connection
	E0729 17:52:33.396698       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43270: use of closed network connection
	E0729 17:52:33.591298       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43292: use of closed network connection
	E0729 17:52:33.759147       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43300: use of closed network connection
	E0729 17:52:33.941785       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43312: use of closed network connection
	E0729 17:52:34.115769       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43328: use of closed network connection
	E0729 17:52:34.403229       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43352: use of closed network connection
	E0729 17:52:34.577470       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43368: use of closed network connection
	E0729 17:52:34.756509       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43394: use of closed network connection
	E0729 17:52:35.108999       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43424: use of closed network connection
	E0729 17:52:35.285603       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43428: use of closed network connection
	W0729 17:53:55.769696       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.102 192.168.39.185]
	
	
	==> kube-controller-manager [152a9fa24ee44b5e9f21db72728d07a6432ed62f3a5c2c05ca7c1cd6de36609a] <==
	I0729 17:52:29.351801       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.306µs"
	I0729 17:52:29.352301       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.836µs"
	I0729 17:52:29.352888       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.273µs"
	I0729 17:52:29.468088       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.39351ms"
	I0729 17:52:29.468289       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.648µs"
	I0729 17:52:30.154671       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.487µs"
	I0729 17:52:30.168073       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="69.181µs"
	I0729 17:52:30.190577       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.195µs"
	I0729 17:52:30.206651       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.85µs"
	I0729 17:52:30.229419       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.558µs"
	I0729 17:52:30.247923       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="70.998µs"
	I0729 17:52:31.361672       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.355758ms"
	I0729 17:52:31.363088       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.773µs"
	I0729 17:52:31.619902       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="70.939654ms"
	I0729 17:52:31.620005       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="73.5µs"
	I0729 17:52:32.173953       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.036769ms"
	I0729 17:52:32.174218       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="129.3µs"
	E0729 17:53:04.379805       1 certificate_controller.go:146] Sync csr-2lzzf failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-2lzzf": the object has been modified; please apply your changes to the latest version and try again
	I0729 17:53:04.624477       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-794405-m04\" does not exist"
	I0729 17:53:04.713030       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-794405-m04" podCIDRs=["10.244.3.0/24"]
	I0729 17:53:05.800264       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-794405-m04"
	I0729 17:53:23.171449       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-794405-m04"
	I0729 17:54:25.829806       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-794405-m04"
	I0729 17:54:26.038165       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.586432ms"
	I0729 17:54:26.038304       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="95.486µs"
	
	
	==> kube-proxy [2992a8242c5e7814d064049bad66f003aaa89848eaa74422ed7c90776fdf849f] <==
	I0729 17:50:01.835652       1 server_linux.go:69] "Using iptables proxy"
	I0729 17:50:01.858952       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.102"]
	I0729 17:50:01.952952       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 17:50:01.953017       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 17:50:01.953035       1 server_linux.go:165] "Using iptables Proxier"
	I0729 17:50:01.958159       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 17:50:01.958471       1 server.go:872] "Version info" version="v1.30.3"
	I0729 17:50:01.958501       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 17:50:01.959959       1 config.go:101] "Starting endpoint slice config controller"
	I0729 17:50:01.960004       1 config.go:192] "Starting service config controller"
	I0729 17:50:01.960203       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 17:50:01.960205       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 17:50:01.961278       1 config.go:319] "Starting node config controller"
	I0729 17:50:01.961285       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 17:50:02.061293       1 shared_informer.go:320] Caches are synced for service config
	I0729 17:50:02.061493       1 shared_informer.go:320] Caches are synced for node config
	I0729 17:50:02.061523       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [fca3429715988ae09489d3d0f512d28babc61b2e2b8a3324612fba1c47839f25] <==
	W0729 17:49:45.647067       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 17:49:45.647163       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0729 17:49:48.625258       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0729 17:52:01.980170       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-ndmlm\": pod kube-proxy-ndmlm is already assigned to node \"ha-794405-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-ndmlm" node="ha-794405-m03"
	E0729 17:52:01.980690       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-ndmlm\": pod kube-proxy-ndmlm is already assigned to node \"ha-794405-m03\"" pod="kube-system/kube-proxy-ndmlm"
	E0729 17:52:02.039551       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-sw765\": pod kube-proxy-sw765 is already assigned to node \"ha-794405-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-sw765" node="ha-794405-m03"
	E0729 17:52:02.039623       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 02b5f9f8-0406-4261-bd3b-7661ddc6ddd0(kube-system/kube-proxy-sw765) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-sw765"
	E0729 17:52:02.039643       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-sw765\": pod kube-proxy-sw765 is already assigned to node \"ha-794405-m03\"" pod="kube-system/kube-proxy-sw765"
	I0729 17:52:02.039657       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-sw765" node="ha-794405-m03"
	E0729 17:53:04.694842       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-nrw9z\": pod kube-proxy-nrw9z is already assigned to node \"ha-794405-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-nrw9z" node="ha-794405-m04"
	E0729 17:53:04.695637       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod bceaebd9-016e-4ebb-ae2e-b926486cde55(kube-system/kube-proxy-nrw9z) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-nrw9z"
	E0729 17:53:04.695848       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-nrw9z\": pod kube-proxy-nrw9z is already assigned to node \"ha-794405-m04\"" pod="kube-system/kube-proxy-nrw9z"
	I0729 17:53:04.695953       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-nrw9z" node="ha-794405-m04"
	E0729 17:53:04.697141       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-ndgvz\": pod kindnet-ndgvz is already assigned to node \"ha-794405-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-ndgvz" node="ha-794405-m04"
	E0729 17:53:04.697804       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod dac03401-2d2d-4972-b74f-cf1918668c7f(kube-system/kindnet-ndgvz) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-ndgvz"
	E0729 17:53:04.697917       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-ndgvz\": pod kindnet-ndgvz is already assigned to node \"ha-794405-m04\"" pod="kube-system/kindnet-ndgvz"
	I0729 17:53:04.698038       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-ndgvz" node="ha-794405-m04"
	E0729 17:53:04.863070       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-tfmmp\": pod kube-proxy-tfmmp is already assigned to node \"ha-794405-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-tfmmp" node="ha-794405-m04"
	E0729 17:53:04.863407       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod d0afa891-9c8f-4853-947e-8772e52029d8(kube-system/kube-proxy-tfmmp) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-tfmmp"
	E0729 17:53:04.863492       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-tfmmp\": pod kube-proxy-tfmmp is already assigned to node \"ha-794405-m04\"" pod="kube-system/kube-proxy-tfmmp"
	I0729 17:53:04.863555       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-tfmmp" node="ha-794405-m04"
	E0729 17:53:04.866462       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-bkgfr\": pod kindnet-bkgfr is already assigned to node \"ha-794405-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-bkgfr" node="ha-794405-m04"
	E0729 17:53:04.866574       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod d2e31787-c905-4df5-9d46-7f0ceaf731e6(kube-system/kindnet-bkgfr) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-bkgfr"
	E0729 17:53:04.866597       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-bkgfr\": pod kindnet-bkgfr is already assigned to node \"ha-794405-m04\"" pod="kube-system/kindnet-bkgfr"
	I0729 17:53:04.866691       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-bkgfr" node="ha-794405-m04"
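The repeated "already assigned to node" bind errors are benign races: by the time the scheduler retries, .spec.nodeName has already been set by the earlier Bind call, so the pod is not re-queued. A small hypothetical check (not scheduler code) that confirms where one of the pods named above actually landed:

// Sketch: read .spec.nodeName for a pod named in the scheduler log.
// Assumes a kubeconfig at the default location.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-nrw9z", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// A non-empty nodeName means the first Bind already succeeded,
	// so the conflict on the retried binding can safely be ignored.
	fmt.Printf("pod %s is bound to %q\n", pod.Name, pod.Spec.NodeName)
}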
	
	
	==> kubelet <==
	Jul 29 17:52:29 ha-794405 kubelet[1375]: E0729 17:52:29.122501    1375 pod_workers.go:1298] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-n6jnm], unattached volumes=[], failed to process volumes=[kube-api-access-n6jnm]: context canceled" pod="default/busybox-fc5497c4f-rwwkk" podUID="23b90576-c1e2-4995-b6b8-0050d5d13221"
	Jul 29 17:52:30 ha-794405 kubelet[1375]: I0729 17:52:30.167130    1375 topology_manager.go:215] "Topology Admit Handler" podUID="ceb96a8b-de79-4d8b-a767-8e61b163b088" podNamespace="default" podName="busybox-fc5497c4f-9t4xg"
	Jul 29 17:52:30 ha-794405 kubelet[1375]: I0729 17:52:30.297443    1375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj5nz\" (UniqueName: \"kubernetes.io/projected/ceb96a8b-de79-4d8b-a767-8e61b163b088-kube-api-access-lj5nz\") pod \"busybox-fc5497c4f-9t4xg\" (UID: \"ceb96a8b-de79-4d8b-a767-8e61b163b088\") " pod="default/busybox-fc5497c4f-9t4xg"
	Jul 29 17:52:31 ha-794405 kubelet[1375]: I0729 17:52:31.492821    1375 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23b90576-c1e2-4995-b6b8-0050d5d13221" path="/var/lib/kubelet/pods/23b90576-c1e2-4995-b6b8-0050d5d13221/volumes"
	Jul 29 17:52:32 ha-794405 kubelet[1375]: I0729 17:52:32.163844    1375 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-9t4xg" podStartSLOduration=3.300540166 podStartE2EDuration="4.163796615s" podCreationTimestamp="2024-07-29 17:52:28 +0000 UTC" firstStartedPulling="2024-07-29 17:52:30.640234402 +0000 UTC m=+163.365886011" lastFinishedPulling="2024-07-29 17:52:31.50349085 +0000 UTC m=+164.229142460" observedRunningTime="2024-07-29 17:52:32.163056204 +0000 UTC m=+164.888707836" watchObservedRunningTime="2024-07-29 17:52:32.163796615 +0000 UTC m=+164.889448259"
	Jul 29 17:52:47 ha-794405 kubelet[1375]: E0729 17:52:47.519288    1375 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 17:52:47 ha-794405 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 17:52:47 ha-794405 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 17:52:47 ha-794405 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 17:52:47 ha-794405 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 17:53:47 ha-794405 kubelet[1375]: E0729 17:53:47.516416    1375 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 17:53:47 ha-794405 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 17:53:47 ha-794405 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 17:53:47 ha-794405 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 17:53:47 ha-794405 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 17:54:47 ha-794405 kubelet[1375]: E0729 17:54:47.516417    1375 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 17:54:47 ha-794405 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 17:54:47 ha-794405 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 17:54:47 ha-794405 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 17:54:47 ha-794405 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 17:55:47 ha-794405 kubelet[1375]: E0729 17:55:47.514834    1375 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 17:55:47 ha-794405 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 17:55:47 ha-794405 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 17:55:47 ha-794405 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 17:55:47 ha-794405 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
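The hourly canary failures come from ip6tables being unable to reach a "nat" table inside the guest, typically because IPv6 NAT support (ip6table_nat) is not loaded. A small diagnostic sketch, assuming the module-based case, that checks /proc/modules on the node:

// Sketch: check whether the ip6table_nat kernel module is loaded.
// If the support is compiled into the kernel instead, it will not appear here.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/modules")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	found := false
	s := bufio.NewScanner(f)
	for s.Scan() {
		// Each line starts with the module name, e.g. "ip6table_nat 16384 1 ...".
		if strings.HasPrefix(s.Text(), "ip6table_nat ") {
			found = true
			break
		}
	}
	if err := s.Err(); err != nil {
		panic(err)
	}
	fmt.Println("ip6table_nat loaded:", found)
}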
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-794405 -n ha-794405
helpers_test.go:261: (dbg) Run:  kubectl --context ha-794405 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.72s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (57.59s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-794405 status -v=7 --alsologtostderr: exit status 3 (3.219008478s)

                                                
                                                
-- stdout --
	ha-794405
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-794405-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-794405-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-794405-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 17:56:08.433585  110442 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:56:08.433683  110442 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:56:08.433691  110442 out.go:304] Setting ErrFile to fd 2...
	I0729 17:56:08.433696  110442 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:56:08.433935  110442 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19339-88081/.minikube/bin
	I0729 17:56:08.434131  110442 out.go:298] Setting JSON to false
	I0729 17:56:08.434160  110442 mustload.go:65] Loading cluster: ha-794405
	I0729 17:56:08.434208  110442 notify.go:220] Checking for updates...
	I0729 17:56:08.434622  110442 config.go:182] Loaded profile config "ha-794405": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:56:08.434645  110442 status.go:255] checking status of ha-794405 ...
	I0729 17:56:08.435158  110442 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:08.435206  110442 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:08.455476  110442 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34237
	I0729 17:56:08.455862  110442 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:08.456468  110442 main.go:141] libmachine: Using API Version  1
	I0729 17:56:08.456488  110442 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:08.456955  110442 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:08.457200  110442 main.go:141] libmachine: (ha-794405) Calling .GetState
	I0729 17:56:08.458741  110442 status.go:330] ha-794405 host status = "Running" (err=<nil>)
	I0729 17:56:08.458759  110442 host.go:66] Checking if "ha-794405" exists ...
	I0729 17:56:08.459045  110442 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:08.459089  110442 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:08.473371  110442 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36591
	I0729 17:56:08.473733  110442 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:08.474167  110442 main.go:141] libmachine: Using API Version  1
	I0729 17:56:08.474194  110442 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:08.474488  110442 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:08.474660  110442 main.go:141] libmachine: (ha-794405) Calling .GetIP
	I0729 17:56:08.477116  110442 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:56:08.477507  110442 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:56:08.477532  110442 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:56:08.477670  110442 host.go:66] Checking if "ha-794405" exists ...
	I0729 17:56:08.477996  110442 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:08.478038  110442 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:08.492191  110442 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36745
	I0729 17:56:08.492553  110442 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:08.492975  110442 main.go:141] libmachine: Using API Version  1
	I0729 17:56:08.492994  110442 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:08.493308  110442 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:08.493494  110442 main.go:141] libmachine: (ha-794405) Calling .DriverName
	I0729 17:56:08.493698  110442 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:56:08.493731  110442 main.go:141] libmachine: (ha-794405) Calling .GetSSHHostname
	I0729 17:56:08.496200  110442 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:56:08.496618  110442 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:56:08.496643  110442 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:56:08.496804  110442 main.go:141] libmachine: (ha-794405) Calling .GetSSHPort
	I0729 17:56:08.496993  110442 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:56:08.497127  110442 main.go:141] libmachine: (ha-794405) Calling .GetSSHUsername
	I0729 17:56:08.497288  110442 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405/id_rsa Username:docker}
	I0729 17:56:08.578132  110442 ssh_runner.go:195] Run: systemctl --version
	I0729 17:56:08.586315  110442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:56:08.602426  110442 kubeconfig.go:125] found "ha-794405" server: "https://192.168.39.254:8443"
	I0729 17:56:08.602455  110442 api_server.go:166] Checking apiserver status ...
	I0729 17:56:08.602485  110442 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 17:56:08.616594  110442 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1217/cgroup
	W0729 17:56:08.626482  110442 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1217/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 17:56:08.626527  110442 ssh_runner.go:195] Run: ls
	I0729 17:56:08.631101  110442 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 17:56:08.635397  110442 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 17:56:08.635418  110442 status.go:422] ha-794405 apiserver status = Running (err=<nil>)
	I0729 17:56:08.635428  110442 status.go:257] ha-794405 status: &{Name:ha-794405 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 17:56:08.635447  110442 status.go:255] checking status of ha-794405-m02 ...
	I0729 17:56:08.635791  110442 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:08.635848  110442 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:08.650596  110442 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41317
	I0729 17:56:08.651017  110442 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:08.651503  110442 main.go:141] libmachine: Using API Version  1
	I0729 17:56:08.651527  110442 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:08.651866  110442 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:08.652047  110442 main.go:141] libmachine: (ha-794405-m02) Calling .GetState
	I0729 17:56:08.653677  110442 status.go:330] ha-794405-m02 host status = "Running" (err=<nil>)
	I0729 17:56:08.653698  110442 host.go:66] Checking if "ha-794405-m02" exists ...
	I0729 17:56:08.653982  110442 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:08.654021  110442 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:08.669103  110442 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43147
	I0729 17:56:08.669583  110442 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:08.670098  110442 main.go:141] libmachine: Using API Version  1
	I0729 17:56:08.670124  110442 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:08.670468  110442 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:08.670679  110442 main.go:141] libmachine: (ha-794405-m02) Calling .GetIP
	I0729 17:56:08.673327  110442 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:56:08.673762  110442 main.go:141] libmachine: (ha-794405-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:4a:02", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:50:15 +0000 UTC Type:0 Mac:52:54:00:1a:4a:02 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-794405-m02 Clientid:01:52:54:00:1a:4a:02}
	I0729 17:56:08.673786  110442 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:56:08.673964  110442 host.go:66] Checking if "ha-794405-m02" exists ...
	I0729 17:56:08.674321  110442 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:08.674370  110442 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:08.689782  110442 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40095
	I0729 17:56:08.690209  110442 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:08.690657  110442 main.go:141] libmachine: Using API Version  1
	I0729 17:56:08.690678  110442 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:08.690984  110442 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:08.691139  110442 main.go:141] libmachine: (ha-794405-m02) Calling .DriverName
	I0729 17:56:08.691303  110442 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:56:08.691325  110442 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHHostname
	I0729 17:56:08.693928  110442 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:56:08.694349  110442 main.go:141] libmachine: (ha-794405-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:4a:02", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:50:15 +0000 UTC Type:0 Mac:52:54:00:1a:4a:02 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-794405-m02 Clientid:01:52:54:00:1a:4a:02}
	I0729 17:56:08.694376  110442 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:56:08.694517  110442 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHPort
	I0729 17:56:08.694657  110442 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHKeyPath
	I0729 17:56:08.694836  110442 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHUsername
	I0729 17:56:08.694962  110442 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m02/id_rsa Username:docker}
	W0729 17:56:11.241182  110442 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.62:22: connect: no route to host
	W0729 17:56:11.241270  110442 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.62:22: connect: no route to host
	E0729 17:56:11.241302  110442 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.62:22: connect: no route to host
	I0729 17:56:11.241311  110442 status.go:257] ha-794405-m02 status: &{Name:ha-794405-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 17:56:11.241333  110442 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.62:22: connect: no route to host
	I0729 17:56:11.241340  110442 status.go:255] checking status of ha-794405-m03 ...
	I0729 17:56:11.241633  110442 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:11.241682  110442 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:11.256803  110442 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43847
	I0729 17:56:11.257320  110442 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:11.257841  110442 main.go:141] libmachine: Using API Version  1
	I0729 17:56:11.257868  110442 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:11.258158  110442 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:11.258333  110442 main.go:141] libmachine: (ha-794405-m03) Calling .GetState
	I0729 17:56:11.259943  110442 status.go:330] ha-794405-m03 host status = "Running" (err=<nil>)
	I0729 17:56:11.259963  110442 host.go:66] Checking if "ha-794405-m03" exists ...
	I0729 17:56:11.260287  110442 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:11.260326  110442 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:11.276994  110442 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46121
	I0729 17:56:11.277365  110442 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:11.277837  110442 main.go:141] libmachine: Using API Version  1
	I0729 17:56:11.277864  110442 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:11.278187  110442 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:11.278412  110442 main.go:141] libmachine: (ha-794405-m03) Calling .GetIP
	I0729 17:56:11.281685  110442 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:56:11.282171  110442 main.go:141] libmachine: (ha-794405-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:a7:17", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:51:28 +0000 UTC Type:0 Mac:52:54:00:6d:a7:17 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-794405-m03 Clientid:01:52:54:00:6d:a7:17}
	I0729 17:56:11.282200  110442 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined IP address 192.168.39.185 and MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:56:11.282360  110442 host.go:66] Checking if "ha-794405-m03" exists ...
	I0729 17:56:11.282767  110442 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:11.282808  110442 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:11.298359  110442 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43649
	I0729 17:56:11.298764  110442 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:11.299203  110442 main.go:141] libmachine: Using API Version  1
	I0729 17:56:11.299224  110442 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:11.299536  110442 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:11.299730  110442 main.go:141] libmachine: (ha-794405-m03) Calling .DriverName
	I0729 17:56:11.299912  110442 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:56:11.299933  110442 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHHostname
	I0729 17:56:11.302914  110442 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:56:11.303351  110442 main.go:141] libmachine: (ha-794405-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:a7:17", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:51:28 +0000 UTC Type:0 Mac:52:54:00:6d:a7:17 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-794405-m03 Clientid:01:52:54:00:6d:a7:17}
	I0729 17:56:11.303377  110442 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined IP address 192.168.39.185 and MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:56:11.303577  110442 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHPort
	I0729 17:56:11.303750  110442 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHKeyPath
	I0729 17:56:11.303900  110442 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHUsername
	I0729 17:56:11.304046  110442 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m03/id_rsa Username:docker}
	I0729 17:56:11.385540  110442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:56:11.401304  110442 kubeconfig.go:125] found "ha-794405" server: "https://192.168.39.254:8443"
	I0729 17:56:11.401335  110442 api_server.go:166] Checking apiserver status ...
	I0729 17:56:11.401378  110442 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 17:56:11.419020  110442 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1518/cgroup
	W0729 17:56:11.434886  110442 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1518/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 17:56:11.434956  110442 ssh_runner.go:195] Run: ls
	I0729 17:56:11.442275  110442 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 17:56:11.448238  110442 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 17:56:11.448260  110442 status.go:422] ha-794405-m03 apiserver status = Running (err=<nil>)
	I0729 17:56:11.448268  110442 status.go:257] ha-794405-m03 status: &{Name:ha-794405-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 17:56:11.448293  110442 status.go:255] checking status of ha-794405-m04 ...
	I0729 17:56:11.448572  110442 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:11.448603  110442 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:11.463870  110442 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46787
	I0729 17:56:11.464272  110442 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:11.464910  110442 main.go:141] libmachine: Using API Version  1
	I0729 17:56:11.464936  110442 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:11.465287  110442 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:11.465490  110442 main.go:141] libmachine: (ha-794405-m04) Calling .GetState
	I0729 17:56:11.467200  110442 status.go:330] ha-794405-m04 host status = "Running" (err=<nil>)
	I0729 17:56:11.467221  110442 host.go:66] Checking if "ha-794405-m04" exists ...
	I0729 17:56:11.467562  110442 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:11.467608  110442 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:11.484085  110442 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39345
	I0729 17:56:11.484489  110442 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:11.485018  110442 main.go:141] libmachine: Using API Version  1
	I0729 17:56:11.485049  110442 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:11.485381  110442 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:11.485548  110442 main.go:141] libmachine: (ha-794405-m04) Calling .GetIP
	I0729 17:56:11.488371  110442 main.go:141] libmachine: (ha-794405-m04) DBG | domain ha-794405-m04 has defined MAC address 52:54:00:9f:d3:c3 in network mk-ha-794405
	I0729 17:56:11.488766  110442 main.go:141] libmachine: (ha-794405-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:d3:c3", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:52:49 +0000 UTC Type:0 Mac:52:54:00:9f:d3:c3 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-794405-m04 Clientid:01:52:54:00:9f:d3:c3}
	I0729 17:56:11.488806  110442 main.go:141] libmachine: (ha-794405-m04) DBG | domain ha-794405-m04 has defined IP address 192.168.39.179 and MAC address 52:54:00:9f:d3:c3 in network mk-ha-794405
	I0729 17:56:11.489017  110442 host.go:66] Checking if "ha-794405-m04" exists ...
	I0729 17:56:11.489428  110442 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:11.489473  110442 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:11.504180  110442 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43885
	I0729 17:56:11.504639  110442 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:11.505119  110442 main.go:141] libmachine: Using API Version  1
	I0729 17:56:11.505140  110442 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:11.505449  110442 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:11.505628  110442 main.go:141] libmachine: (ha-794405-m04) Calling .DriverName
	I0729 17:56:11.505833  110442 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:56:11.505853  110442 main.go:141] libmachine: (ha-794405-m04) Calling .GetSSHHostname
	I0729 17:56:11.508849  110442 main.go:141] libmachine: (ha-794405-m04) DBG | domain ha-794405-m04 has defined MAC address 52:54:00:9f:d3:c3 in network mk-ha-794405
	I0729 17:56:11.509323  110442 main.go:141] libmachine: (ha-794405-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:d3:c3", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:52:49 +0000 UTC Type:0 Mac:52:54:00:9f:d3:c3 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-794405-m04 Clientid:01:52:54:00:9f:d3:c3}
	I0729 17:56:11.509342  110442 main.go:141] libmachine: (ha-794405-m04) DBG | domain ha-794405-m04 has defined IP address 192.168.39.179 and MAC address 52:54:00:9f:d3:c3 in network mk-ha-794405
	I0729 17:56:11.509508  110442 main.go:141] libmachine: (ha-794405-m04) Calling .GetSSHPort
	I0729 17:56:11.509734  110442 main.go:141] libmachine: (ha-794405-m04) Calling .GetSSHKeyPath
	I0729 17:56:11.509919  110442 main.go:141] libmachine: (ha-794405-m04) Calling .GetSSHUsername
	I0729 17:56:11.510066  110442 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m04/id_rsa Username:docker}
	I0729 17:56:11.592818  110442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:56:11.606690  110442 status.go:257] ha-794405-m04 status: &{Name:ha-794405-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
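The stderr trace above shows the two probes behind each status line: an SSH session to the node (which fails for ha-794405-m02 with "no route to host", yielding Host: Error) and an HTTPS GET against the load-balanced apiserver /healthz endpoint. A minimal sketch of equivalent probes, using the addresses from the log; TLS verification is skipped purely for brevity, and whether an unauthenticated /healthz request is accepted depends on the cluster's anonymous-auth settings (minikube's own check uses the cluster credentials):

// Sketch: TCP-dial the node's SSH port and hit the apiserver health endpoint.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net"
	"net/http"
	"time"
)

func main() {
	// SSH reachability: the df -h /var check needs a working SSH session first.
	conn, err := net.DialTimeout("tcp", "192.168.39.62:22", 3*time.Second)
	if err != nil {
		fmt.Println("ssh dial failed:", err) // e.g. "connect: no route to host"
	} else {
		conn.Close()
		fmt.Println("ssh port reachable")
	}

	// apiserver health via the VIP used in the log.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("healthz request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)
}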
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-794405 status -v=7 --alsologtostderr: exit status 3 (4.762320333s)

                                                
                                                
-- stdout --
	ha-794405
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-794405-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-794405-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-794405-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 17:56:13.017509  110542 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:56:13.017741  110542 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:56:13.017749  110542 out.go:304] Setting ErrFile to fd 2...
	I0729 17:56:13.017754  110542 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:56:13.017923  110542 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19339-88081/.minikube/bin
	I0729 17:56:13.018077  110542 out.go:298] Setting JSON to false
	I0729 17:56:13.018096  110542 mustload.go:65] Loading cluster: ha-794405
	I0729 17:56:13.018187  110542 notify.go:220] Checking for updates...
	I0729 17:56:13.018481  110542 config.go:182] Loaded profile config "ha-794405": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:56:13.018503  110542 status.go:255] checking status of ha-794405 ...
	I0729 17:56:13.018932  110542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:13.018986  110542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:13.034361  110542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45973
	I0729 17:56:13.034772  110542 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:13.035362  110542 main.go:141] libmachine: Using API Version  1
	I0729 17:56:13.035383  110542 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:13.035735  110542 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:13.035943  110542 main.go:141] libmachine: (ha-794405) Calling .GetState
	I0729 17:56:13.037738  110542 status.go:330] ha-794405 host status = "Running" (err=<nil>)
	I0729 17:56:13.037754  110542 host.go:66] Checking if "ha-794405" exists ...
	I0729 17:56:13.038021  110542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:13.038056  110542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:13.052962  110542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40317
	I0729 17:56:13.053381  110542 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:13.053825  110542 main.go:141] libmachine: Using API Version  1
	I0729 17:56:13.053850  110542 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:13.054156  110542 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:13.054337  110542 main.go:141] libmachine: (ha-794405) Calling .GetIP
	I0729 17:56:13.056939  110542 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:56:13.057359  110542 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:56:13.057378  110542 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:56:13.057500  110542 host.go:66] Checking if "ha-794405" exists ...
	I0729 17:56:13.057822  110542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:13.057877  110542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:13.074375  110542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41979
	I0729 17:56:13.074797  110542 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:13.075283  110542 main.go:141] libmachine: Using API Version  1
	I0729 17:56:13.075309  110542 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:13.075633  110542 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:13.075825  110542 main.go:141] libmachine: (ha-794405) Calling .DriverName
	I0729 17:56:13.076067  110542 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:56:13.076106  110542 main.go:141] libmachine: (ha-794405) Calling .GetSSHHostname
	I0729 17:56:13.078911  110542 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:56:13.079411  110542 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:56:13.079447  110542 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:56:13.079561  110542 main.go:141] libmachine: (ha-794405) Calling .GetSSHPort
	I0729 17:56:13.079750  110542 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:56:13.079897  110542 main.go:141] libmachine: (ha-794405) Calling .GetSSHUsername
	I0729 17:56:13.080018  110542 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405/id_rsa Username:docker}
	I0729 17:56:13.160725  110542 ssh_runner.go:195] Run: systemctl --version
	I0729 17:56:13.167251  110542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:56:13.185582  110542 kubeconfig.go:125] found "ha-794405" server: "https://192.168.39.254:8443"
	I0729 17:56:13.185610  110542 api_server.go:166] Checking apiserver status ...
	I0729 17:56:13.185642  110542 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 17:56:13.200729  110542 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1217/cgroup
	W0729 17:56:13.211459  110542 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1217/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 17:56:13.211532  110542 ssh_runner.go:195] Run: ls
	I0729 17:56:13.215857  110542 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 17:56:13.222169  110542 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 17:56:13.222197  110542 status.go:422] ha-794405 apiserver status = Running (err=<nil>)
	I0729 17:56:13.222211  110542 status.go:257] ha-794405 status: &{Name:ha-794405 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 17:56:13.222232  110542 status.go:255] checking status of ha-794405-m02 ...
	I0729 17:56:13.222579  110542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:13.222614  110542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:13.237880  110542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36541
	I0729 17:56:13.238278  110542 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:13.238737  110542 main.go:141] libmachine: Using API Version  1
	I0729 17:56:13.238761  110542 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:13.239089  110542 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:13.239281  110542 main.go:141] libmachine: (ha-794405-m02) Calling .GetState
	I0729 17:56:13.240808  110542 status.go:330] ha-794405-m02 host status = "Running" (err=<nil>)
	I0729 17:56:13.240824  110542 host.go:66] Checking if "ha-794405-m02" exists ...
	I0729 17:56:13.241143  110542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:13.241207  110542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:13.255440  110542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33761
	I0729 17:56:13.255924  110542 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:13.256409  110542 main.go:141] libmachine: Using API Version  1
	I0729 17:56:13.256438  110542 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:13.256743  110542 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:13.256997  110542 main.go:141] libmachine: (ha-794405-m02) Calling .GetIP
	I0729 17:56:13.259594  110542 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:56:13.259980  110542 main.go:141] libmachine: (ha-794405-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:4a:02", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:50:15 +0000 UTC Type:0 Mac:52:54:00:1a:4a:02 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-794405-m02 Clientid:01:52:54:00:1a:4a:02}
	I0729 17:56:13.259999  110542 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:56:13.260207  110542 host.go:66] Checking if "ha-794405-m02" exists ...
	I0729 17:56:13.260525  110542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:13.260559  110542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:13.275840  110542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36547
	I0729 17:56:13.276277  110542 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:13.276698  110542 main.go:141] libmachine: Using API Version  1
	I0729 17:56:13.276728  110542 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:13.277105  110542 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:13.277252  110542 main.go:141] libmachine: (ha-794405-m02) Calling .DriverName
	I0729 17:56:13.277444  110542 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:56:13.277464  110542 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHHostname
	I0729 17:56:13.279721  110542 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:56:13.280067  110542 main.go:141] libmachine: (ha-794405-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:4a:02", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:50:15 +0000 UTC Type:0 Mac:52:54:00:1a:4a:02 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-794405-m02 Clientid:01:52:54:00:1a:4a:02}
	I0729 17:56:13.280091  110542 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:56:13.280251  110542 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHPort
	I0729 17:56:13.280433  110542 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHKeyPath
	I0729 17:56:13.280591  110542 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHUsername
	I0729 17:56:13.280748  110542 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m02/id_rsa Username:docker}
	W0729 17:56:14.313124  110542 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.62:22: connect: no route to host
	I0729 17:56:14.313181  110542 retry.go:31] will retry after 233.999151ms: dial tcp 192.168.39.62:22: connect: no route to host
	W0729 17:56:17.385144  110542 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.62:22: connect: no route to host
	W0729 17:56:17.385251  110542 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.62:22: connect: no route to host
	E0729 17:56:17.385275  110542 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.62:22: connect: no route to host
	I0729 17:56:17.385287  110542 status.go:257] ha-794405-m02 status: &{Name:ha-794405-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 17:56:17.385331  110542 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.62:22: connect: no route to host
	I0729 17:56:17.385342  110542 status.go:255] checking status of ha-794405-m03 ...
	I0729 17:56:17.385675  110542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:17.385744  110542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:17.401581  110542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45141
	I0729 17:56:17.402088  110542 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:17.402565  110542 main.go:141] libmachine: Using API Version  1
	I0729 17:56:17.402591  110542 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:17.402913  110542 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:17.403129  110542 main.go:141] libmachine: (ha-794405-m03) Calling .GetState
	I0729 17:56:17.404649  110542 status.go:330] ha-794405-m03 host status = "Running" (err=<nil>)
	I0729 17:56:17.404670  110542 host.go:66] Checking if "ha-794405-m03" exists ...
	I0729 17:56:17.405072  110542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:17.405110  110542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:17.419517  110542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40377
	I0729 17:56:17.419944  110542 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:17.420412  110542 main.go:141] libmachine: Using API Version  1
	I0729 17:56:17.420432  110542 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:17.420783  110542 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:17.421002  110542 main.go:141] libmachine: (ha-794405-m03) Calling .GetIP
	I0729 17:56:17.423581  110542 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:56:17.424051  110542 main.go:141] libmachine: (ha-794405-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:a7:17", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:51:28 +0000 UTC Type:0 Mac:52:54:00:6d:a7:17 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-794405-m03 Clientid:01:52:54:00:6d:a7:17}
	I0729 17:56:17.424077  110542 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined IP address 192.168.39.185 and MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:56:17.424197  110542 host.go:66] Checking if "ha-794405-m03" exists ...
	I0729 17:56:17.424588  110542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:17.424633  110542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:17.441172  110542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34331
	I0729 17:56:17.441607  110542 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:17.442110  110542 main.go:141] libmachine: Using API Version  1
	I0729 17:56:17.442132  110542 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:17.442440  110542 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:17.442637  110542 main.go:141] libmachine: (ha-794405-m03) Calling .DriverName
	I0729 17:56:17.442902  110542 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:56:17.442926  110542 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHHostname
	I0729 17:56:17.445632  110542 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:56:17.446110  110542 main.go:141] libmachine: (ha-794405-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:a7:17", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:51:28 +0000 UTC Type:0 Mac:52:54:00:6d:a7:17 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-794405-m03 Clientid:01:52:54:00:6d:a7:17}
	I0729 17:56:17.446133  110542 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined IP address 192.168.39.185 and MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:56:17.446330  110542 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHPort
	I0729 17:56:17.446510  110542 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHKeyPath
	I0729 17:56:17.446658  110542 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHUsername
	I0729 17:56:17.446773  110542 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m03/id_rsa Username:docker}
	I0729 17:56:17.524852  110542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:56:17.543175  110542 kubeconfig.go:125] found "ha-794405" server: "https://192.168.39.254:8443"
	I0729 17:56:17.543211  110542 api_server.go:166] Checking apiserver status ...
	I0729 17:56:17.543248  110542 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 17:56:17.557652  110542 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1518/cgroup
	W0729 17:56:17.567212  110542 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1518/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 17:56:17.567262  110542 ssh_runner.go:195] Run: ls
	I0729 17:56:17.572214  110542 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 17:56:17.578750  110542 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 17:56:17.578777  110542 status.go:422] ha-794405-m03 apiserver status = Running (err=<nil>)
	I0729 17:56:17.578788  110542 status.go:257] ha-794405-m03 status: &{Name:ha-794405-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 17:56:17.578809  110542 status.go:255] checking status of ha-794405-m04 ...
	I0729 17:56:17.579125  110542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:17.579172  110542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:17.595387  110542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34923
	I0729 17:56:17.595776  110542 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:17.596320  110542 main.go:141] libmachine: Using API Version  1
	I0729 17:56:17.596343  110542 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:17.596701  110542 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:17.596910  110542 main.go:141] libmachine: (ha-794405-m04) Calling .GetState
	I0729 17:56:17.598309  110542 status.go:330] ha-794405-m04 host status = "Running" (err=<nil>)
	I0729 17:56:17.598327  110542 host.go:66] Checking if "ha-794405-m04" exists ...
	I0729 17:56:17.598598  110542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:17.598641  110542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:17.613640  110542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42795
	I0729 17:56:17.614065  110542 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:17.614537  110542 main.go:141] libmachine: Using API Version  1
	I0729 17:56:17.614574  110542 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:17.614910  110542 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:17.615099  110542 main.go:141] libmachine: (ha-794405-m04) Calling .GetIP
	I0729 17:56:17.617551  110542 main.go:141] libmachine: (ha-794405-m04) DBG | domain ha-794405-m04 has defined MAC address 52:54:00:9f:d3:c3 in network mk-ha-794405
	I0729 17:56:17.618028  110542 main.go:141] libmachine: (ha-794405-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:d3:c3", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:52:49 +0000 UTC Type:0 Mac:52:54:00:9f:d3:c3 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-794405-m04 Clientid:01:52:54:00:9f:d3:c3}
	I0729 17:56:17.618054  110542 main.go:141] libmachine: (ha-794405-m04) DBG | domain ha-794405-m04 has defined IP address 192.168.39.179 and MAC address 52:54:00:9f:d3:c3 in network mk-ha-794405
	I0729 17:56:17.618180  110542 host.go:66] Checking if "ha-794405-m04" exists ...
	I0729 17:56:17.618482  110542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:17.618524  110542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:17.633541  110542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43783
	I0729 17:56:17.633950  110542 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:17.634330  110542 main.go:141] libmachine: Using API Version  1
	I0729 17:56:17.634355  110542 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:17.634609  110542 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:17.634824  110542 main.go:141] libmachine: (ha-794405-m04) Calling .DriverName
	I0729 17:56:17.635027  110542 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:56:17.635045  110542 main.go:141] libmachine: (ha-794405-m04) Calling .GetSSHHostname
	I0729 17:56:17.637915  110542 main.go:141] libmachine: (ha-794405-m04) DBG | domain ha-794405-m04 has defined MAC address 52:54:00:9f:d3:c3 in network mk-ha-794405
	I0729 17:56:17.638322  110542 main.go:141] libmachine: (ha-794405-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:d3:c3", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:52:49 +0000 UTC Type:0 Mac:52:54:00:9f:d3:c3 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-794405-m04 Clientid:01:52:54:00:9f:d3:c3}
	I0729 17:56:17.638361  110542 main.go:141] libmachine: (ha-794405-m04) DBG | domain ha-794405-m04 has defined IP address 192.168.39.179 and MAC address 52:54:00:9f:d3:c3 in network mk-ha-794405
	I0729 17:56:17.638532  110542 main.go:141] libmachine: (ha-794405-m04) Calling .GetSSHPort
	I0729 17:56:17.638707  110542 main.go:141] libmachine: (ha-794405-m04) Calling .GetSSHKeyPath
	I0729 17:56:17.638850  110542 main.go:141] libmachine: (ha-794405-m04) Calling .GetSSHUsername
	I0729 17:56:17.638974  110542 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m04/id_rsa Username:docker}
	I0729 17:56:17.720370  110542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:56:17.734911  110542 status.go:257] ha-794405-m04 status: &{Name:ha-794405-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
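The status run above ends with the apiserver check: after the freezer-cgroup lookup comes up empty, the tool falls back to an HTTPS GET against https://192.168.39.254:8443/healthz and accepts the 200 "ok" response. The following is a minimal sketch of that kind of probe, not minikube's own api_server.go; the hard-coded VIP endpoint and the InsecureSkipVerify transport are assumptions for illustration (a real client would trust the cluster CA instead).

// healthz_probe.go - illustrative sketch of an apiserver /healthz probe.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// probeHealthz returns true when the endpoint answers 200 with body "ok".
func probeHealthz(url string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption: skip TLS verification for the sketch only.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	ok, err := probeHealthz("https://192.168.39.254:8443/healthz")
	fmt.Println(ok, err)
}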
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-794405 status -v=7 --alsologtostderr: exit status 3 (5.208351662s)

                                                
                                                
-- stdout --
	ha-794405
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-794405-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-794405-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-794405-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 17:56:18.711422  110643 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:56:18.711534  110643 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:56:18.711543  110643 out.go:304] Setting ErrFile to fd 2...
	I0729 17:56:18.711548  110643 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:56:18.711723  110643 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19339-88081/.minikube/bin
	I0729 17:56:18.711902  110643 out.go:298] Setting JSON to false
	I0729 17:56:18.711928  110643 mustload.go:65] Loading cluster: ha-794405
	I0729 17:56:18.712145  110643 notify.go:220] Checking for updates...
	I0729 17:56:18.712297  110643 config.go:182] Loaded profile config "ha-794405": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:56:18.712313  110643 status.go:255] checking status of ha-794405 ...
	I0729 17:56:18.712738  110643 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:18.712798  110643 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:18.728852  110643 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34321
	I0729 17:56:18.729264  110643 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:18.729805  110643 main.go:141] libmachine: Using API Version  1
	I0729 17:56:18.729826  110643 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:18.730233  110643 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:18.730489  110643 main.go:141] libmachine: (ha-794405) Calling .GetState
	I0729 17:56:18.732178  110643 status.go:330] ha-794405 host status = "Running" (err=<nil>)
	I0729 17:56:18.732198  110643 host.go:66] Checking if "ha-794405" exists ...
	I0729 17:56:18.732608  110643 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:18.732651  110643 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:18.746866  110643 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45743
	I0729 17:56:18.747241  110643 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:18.747657  110643 main.go:141] libmachine: Using API Version  1
	I0729 17:56:18.747672  110643 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:18.747957  110643 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:18.748146  110643 main.go:141] libmachine: (ha-794405) Calling .GetIP
	I0729 17:56:18.750857  110643 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:56:18.751280  110643 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:56:18.751315  110643 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:56:18.751431  110643 host.go:66] Checking if "ha-794405" exists ...
	I0729 17:56:18.751701  110643 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:18.751735  110643 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:18.766377  110643 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36779
	I0729 17:56:18.766710  110643 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:18.767129  110643 main.go:141] libmachine: Using API Version  1
	I0729 17:56:18.767151  110643 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:18.767449  110643 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:18.767619  110643 main.go:141] libmachine: (ha-794405) Calling .DriverName
	I0729 17:56:18.767806  110643 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:56:18.767832  110643 main.go:141] libmachine: (ha-794405) Calling .GetSSHHostname
	I0729 17:56:18.770337  110643 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:56:18.770732  110643 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:56:18.770770  110643 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:56:18.770929  110643 main.go:141] libmachine: (ha-794405) Calling .GetSSHPort
	I0729 17:56:18.771112  110643 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:56:18.771282  110643 main.go:141] libmachine: (ha-794405) Calling .GetSSHUsername
	I0729 17:56:18.771547  110643 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405/id_rsa Username:docker}
	I0729 17:56:18.848809  110643 ssh_runner.go:195] Run: systemctl --version
	I0729 17:56:18.855735  110643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:56:18.870318  110643 kubeconfig.go:125] found "ha-794405" server: "https://192.168.39.254:8443"
	I0729 17:56:18.870359  110643 api_server.go:166] Checking apiserver status ...
	I0729 17:56:18.870403  110643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 17:56:18.884483  110643 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1217/cgroup
	W0729 17:56:18.893731  110643 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1217/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 17:56:18.893800  110643 ssh_runner.go:195] Run: ls
	I0729 17:56:18.898433  110643 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 17:56:18.905587  110643 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 17:56:18.905614  110643 status.go:422] ha-794405 apiserver status = Running (err=<nil>)
	I0729 17:56:18.905625  110643 status.go:257] ha-794405 status: &{Name:ha-794405 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 17:56:18.905648  110643 status.go:255] checking status of ha-794405-m02 ...
	I0729 17:56:18.905967  110643 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:18.906011  110643 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:18.921729  110643 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36775
	I0729 17:56:18.922159  110643 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:18.922643  110643 main.go:141] libmachine: Using API Version  1
	I0729 17:56:18.922658  110643 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:18.923019  110643 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:18.923212  110643 main.go:141] libmachine: (ha-794405-m02) Calling .GetState
	I0729 17:56:18.924823  110643 status.go:330] ha-794405-m02 host status = "Running" (err=<nil>)
	I0729 17:56:18.924841  110643 host.go:66] Checking if "ha-794405-m02" exists ...
	I0729 17:56:18.925245  110643 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:18.925287  110643 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:18.939845  110643 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42287
	I0729 17:56:18.940221  110643 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:18.940656  110643 main.go:141] libmachine: Using API Version  1
	I0729 17:56:18.940680  110643 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:18.941006  110643 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:18.941185  110643 main.go:141] libmachine: (ha-794405-m02) Calling .GetIP
	I0729 17:56:18.943979  110643 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:56:18.944426  110643 main.go:141] libmachine: (ha-794405-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:4a:02", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:50:15 +0000 UTC Type:0 Mac:52:54:00:1a:4a:02 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-794405-m02 Clientid:01:52:54:00:1a:4a:02}
	I0729 17:56:18.944452  110643 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:56:18.944642  110643 host.go:66] Checking if "ha-794405-m02" exists ...
	I0729 17:56:18.944959  110643 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:18.944993  110643 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:18.959268  110643 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34537
	I0729 17:56:18.959613  110643 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:18.960056  110643 main.go:141] libmachine: Using API Version  1
	I0729 17:56:18.960075  110643 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:18.960414  110643 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:18.960610  110643 main.go:141] libmachine: (ha-794405-m02) Calling .DriverName
	I0729 17:56:18.960815  110643 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:56:18.960845  110643 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHHostname
	I0729 17:56:18.963303  110643 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:56:18.963700  110643 main.go:141] libmachine: (ha-794405-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:4a:02", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:50:15 +0000 UTC Type:0 Mac:52:54:00:1a:4a:02 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-794405-m02 Clientid:01:52:54:00:1a:4a:02}
	I0729 17:56:18.963727  110643 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:56:18.963876  110643 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHPort
	I0729 17:56:18.964070  110643 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHKeyPath
	I0729 17:56:18.964203  110643 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHUsername
	I0729 17:56:18.964329  110643 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m02/id_rsa Username:docker}
	W0729 17:56:20.457163  110643 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.62:22: connect: no route to host
	I0729 17:56:20.457247  110643 retry.go:31] will retry after 295.647123ms: dial tcp 192.168.39.62:22: connect: no route to host
	W0729 17:56:23.529195  110643 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.62:22: connect: no route to host
	W0729 17:56:23.529288  110643 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.62:22: connect: no route to host
	E0729 17:56:23.529312  110643 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.62:22: connect: no route to host
	I0729 17:56:23.529321  110643 status.go:257] ha-794405-m02 status: &{Name:ha-794405-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 17:56:23.529346  110643 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.62:22: connect: no route to host
	I0729 17:56:23.529353  110643 status.go:255] checking status of ha-794405-m03 ...
	I0729 17:56:23.529657  110643 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:23.529704  110643 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:23.544970  110643 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33649
	I0729 17:56:23.545455  110643 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:23.546022  110643 main.go:141] libmachine: Using API Version  1
	I0729 17:56:23.546056  110643 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:23.546346  110643 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:23.546528  110643 main.go:141] libmachine: (ha-794405-m03) Calling .GetState
	I0729 17:56:23.548025  110643 status.go:330] ha-794405-m03 host status = "Running" (err=<nil>)
	I0729 17:56:23.548045  110643 host.go:66] Checking if "ha-794405-m03" exists ...
	I0729 17:56:23.548350  110643 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:23.548397  110643 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:23.563864  110643 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37211
	I0729 17:56:23.564271  110643 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:23.564735  110643 main.go:141] libmachine: Using API Version  1
	I0729 17:56:23.564757  110643 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:23.565079  110643 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:23.565276  110643 main.go:141] libmachine: (ha-794405-m03) Calling .GetIP
	I0729 17:56:23.567821  110643 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:56:23.568227  110643 main.go:141] libmachine: (ha-794405-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:a7:17", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:51:28 +0000 UTC Type:0 Mac:52:54:00:6d:a7:17 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-794405-m03 Clientid:01:52:54:00:6d:a7:17}
	I0729 17:56:23.568253  110643 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined IP address 192.168.39.185 and MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:56:23.568408  110643 host.go:66] Checking if "ha-794405-m03" exists ...
	I0729 17:56:23.568752  110643 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:23.568794  110643 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:23.583159  110643 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34927
	I0729 17:56:23.583568  110643 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:23.584005  110643 main.go:141] libmachine: Using API Version  1
	I0729 17:56:23.584023  110643 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:23.584345  110643 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:23.584538  110643 main.go:141] libmachine: (ha-794405-m03) Calling .DriverName
	I0729 17:56:23.584750  110643 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:56:23.584773  110643 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHHostname
	I0729 17:56:23.587544  110643 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:56:23.587960  110643 main.go:141] libmachine: (ha-794405-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:a7:17", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:51:28 +0000 UTC Type:0 Mac:52:54:00:6d:a7:17 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-794405-m03 Clientid:01:52:54:00:6d:a7:17}
	I0729 17:56:23.587987  110643 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined IP address 192.168.39.185 and MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:56:23.588160  110643 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHPort
	I0729 17:56:23.588343  110643 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHKeyPath
	I0729 17:56:23.588484  110643 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHUsername
	I0729 17:56:23.588620  110643 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m03/id_rsa Username:docker}
	I0729 17:56:23.669224  110643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:56:23.686767  110643 kubeconfig.go:125] found "ha-794405" server: "https://192.168.39.254:8443"
	I0729 17:56:23.686812  110643 api_server.go:166] Checking apiserver status ...
	I0729 17:56:23.686858  110643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 17:56:23.701279  110643 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1518/cgroup
	W0729 17:56:23.711216  110643 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1518/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 17:56:23.711271  110643 ssh_runner.go:195] Run: ls
	I0729 17:56:23.716011  110643 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 17:56:23.720223  110643 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 17:56:23.720243  110643 status.go:422] ha-794405-m03 apiserver status = Running (err=<nil>)
	I0729 17:56:23.720255  110643 status.go:257] ha-794405-m03 status: &{Name:ha-794405-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 17:56:23.720275  110643 status.go:255] checking status of ha-794405-m04 ...
	I0729 17:56:23.720677  110643 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:23.720724  110643 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:23.735950  110643 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33201
	I0729 17:56:23.736348  110643 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:23.736797  110643 main.go:141] libmachine: Using API Version  1
	I0729 17:56:23.736814  110643 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:23.737127  110643 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:23.737317  110643 main.go:141] libmachine: (ha-794405-m04) Calling .GetState
	I0729 17:56:23.739142  110643 status.go:330] ha-794405-m04 host status = "Running" (err=<nil>)
	I0729 17:56:23.739170  110643 host.go:66] Checking if "ha-794405-m04" exists ...
	I0729 17:56:23.739486  110643 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:23.739519  110643 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:23.755175  110643 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32913
	I0729 17:56:23.755634  110643 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:23.756125  110643 main.go:141] libmachine: Using API Version  1
	I0729 17:56:23.756155  110643 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:23.756439  110643 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:23.756611  110643 main.go:141] libmachine: (ha-794405-m04) Calling .GetIP
	I0729 17:56:23.759367  110643 main.go:141] libmachine: (ha-794405-m04) DBG | domain ha-794405-m04 has defined MAC address 52:54:00:9f:d3:c3 in network mk-ha-794405
	I0729 17:56:23.759808  110643 main.go:141] libmachine: (ha-794405-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:d3:c3", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:52:49 +0000 UTC Type:0 Mac:52:54:00:9f:d3:c3 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-794405-m04 Clientid:01:52:54:00:9f:d3:c3}
	I0729 17:56:23.759835  110643 main.go:141] libmachine: (ha-794405-m04) DBG | domain ha-794405-m04 has defined IP address 192.168.39.179 and MAC address 52:54:00:9f:d3:c3 in network mk-ha-794405
	I0729 17:56:23.760016  110643 host.go:66] Checking if "ha-794405-m04" exists ...
	I0729 17:56:23.760425  110643 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:23.760468  110643 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:23.774993  110643 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33871
	I0729 17:56:23.775389  110643 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:23.775845  110643 main.go:141] libmachine: Using API Version  1
	I0729 17:56:23.775867  110643 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:23.776187  110643 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:23.776391  110643 main.go:141] libmachine: (ha-794405-m04) Calling .DriverName
	I0729 17:56:23.776600  110643 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:56:23.776620  110643 main.go:141] libmachine: (ha-794405-m04) Calling .GetSSHHostname
	I0729 17:56:23.779551  110643 main.go:141] libmachine: (ha-794405-m04) DBG | domain ha-794405-m04 has defined MAC address 52:54:00:9f:d3:c3 in network mk-ha-794405
	I0729 17:56:23.780007  110643 main.go:141] libmachine: (ha-794405-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:d3:c3", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:52:49 +0000 UTC Type:0 Mac:52:54:00:9f:d3:c3 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-794405-m04 Clientid:01:52:54:00:9f:d3:c3}
	I0729 17:56:23.780029  110643 main.go:141] libmachine: (ha-794405-m04) DBG | domain ha-794405-m04 has defined IP address 192.168.39.179 and MAC address 52:54:00:9f:d3:c3 in network mk-ha-794405
	I0729 17:56:23.780171  110643 main.go:141] libmachine: (ha-794405-m04) Calling .GetSSHPort
	I0729 17:56:23.780341  110643 main.go:141] libmachine: (ha-794405-m04) Calling .GetSSHKeyPath
	I0729 17:56:23.780499  110643 main.go:141] libmachine: (ha-794405-m04) Calling .GetSSHUsername
	I0729 17:56:23.780640  110643 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m04/id_rsa Username:docker}
	I0729 17:56:23.860531  110643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:56:23.875364  110643 status.go:257] ha-794405-m04 status: &{Name:ha-794405-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
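In this run the SSH dial to ha-794405-m02 (192.168.39.62:22) fails with "no route to host", is retried after a short delay, and then the node is reported as host: Error / kubelet: Nonexistent. Below is a minimal sketch of that dial-and-retry pattern, not minikube's sshutil or retry packages; the address, attempt count, and backoff values are illustrative assumptions.

// dial_retry.go - illustrative sketch of a bounded TCP dial retry.
package main

import (
	"fmt"
	"net"
	"time"
)

// dialWithRetry attempts a TCP connection, retrying after each failure
// up to the given number of attempts before giving up.
func dialWithRetry(addr string, attempts int, backoff time.Duration) (net.Conn, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			return conn, nil
		}
		lastErr = err
		fmt.Printf("dial failure (will retry): %v\n", err)
		time.Sleep(backoff)
	}
	return nil, fmt.Errorf("unreachable after %d attempts: %w", attempts, lastErr)
}

func main() {
	if _, err := dialWithRetry("192.168.39.62:22", 3, 300*time.Millisecond); err != nil {
		// This is the condition the status command maps to host: Error /
		// kubelet: Nonexistent for ha-794405-m02 in the output above.
		fmt.Println(err)
	}
}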
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-794405 status -v=7 --alsologtostderr: exit status 3 (4.671254599s)

                                                
                                                
-- stdout --
	ha-794405
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-794405-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-794405-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-794405-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 17:56:25.556818  110761 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:56:25.556953  110761 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:56:25.556964  110761 out.go:304] Setting ErrFile to fd 2...
	I0729 17:56:25.556971  110761 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:56:25.557170  110761 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19339-88081/.minikube/bin
	I0729 17:56:25.557351  110761 out.go:298] Setting JSON to false
	I0729 17:56:25.557384  110761 mustload.go:65] Loading cluster: ha-794405
	I0729 17:56:25.557519  110761 notify.go:220] Checking for updates...
	I0729 17:56:25.557841  110761 config.go:182] Loaded profile config "ha-794405": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:56:25.557862  110761 status.go:255] checking status of ha-794405 ...
	I0729 17:56:25.558420  110761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:25.558499  110761 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:25.576308  110761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35215
	I0729 17:56:25.576755  110761 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:25.577501  110761 main.go:141] libmachine: Using API Version  1
	I0729 17:56:25.577541  110761 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:25.577889  110761 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:25.578109  110761 main.go:141] libmachine: (ha-794405) Calling .GetState
	I0729 17:56:25.579852  110761 status.go:330] ha-794405 host status = "Running" (err=<nil>)
	I0729 17:56:25.579868  110761 host.go:66] Checking if "ha-794405" exists ...
	I0729 17:56:25.580159  110761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:25.580215  110761 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:25.595273  110761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39109
	I0729 17:56:25.595750  110761 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:25.596264  110761 main.go:141] libmachine: Using API Version  1
	I0729 17:56:25.596285  110761 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:25.596575  110761 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:25.596796  110761 main.go:141] libmachine: (ha-794405) Calling .GetIP
	I0729 17:56:25.599749  110761 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:56:25.600227  110761 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:56:25.600253  110761 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:56:25.600412  110761 host.go:66] Checking if "ha-794405" exists ...
	I0729 17:56:25.600885  110761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:25.600932  110761 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:25.616522  110761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39833
	I0729 17:56:25.616933  110761 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:25.617393  110761 main.go:141] libmachine: Using API Version  1
	I0729 17:56:25.617413  110761 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:25.617740  110761 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:25.617945  110761 main.go:141] libmachine: (ha-794405) Calling .DriverName
	I0729 17:56:25.618213  110761 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:56:25.618240  110761 main.go:141] libmachine: (ha-794405) Calling .GetSSHHostname
	I0729 17:56:25.621600  110761 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:56:25.622019  110761 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:56:25.622035  110761 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:56:25.622199  110761 main.go:141] libmachine: (ha-794405) Calling .GetSSHPort
	I0729 17:56:25.622350  110761 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:56:25.622469  110761 main.go:141] libmachine: (ha-794405) Calling .GetSSHUsername
	I0729 17:56:25.622561  110761 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405/id_rsa Username:docker}
	I0729 17:56:25.706270  110761 ssh_runner.go:195] Run: systemctl --version
	I0729 17:56:25.712555  110761 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:56:25.727207  110761 kubeconfig.go:125] found "ha-794405" server: "https://192.168.39.254:8443"
	I0729 17:56:25.727242  110761 api_server.go:166] Checking apiserver status ...
	I0729 17:56:25.727281  110761 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 17:56:25.744391  110761 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1217/cgroup
	W0729 17:56:25.754462  110761 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1217/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 17:56:25.754525  110761 ssh_runner.go:195] Run: ls
	I0729 17:56:25.759112  110761 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 17:56:25.763561  110761 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 17:56:25.763589  110761 status.go:422] ha-794405 apiserver status = Running (err=<nil>)
	I0729 17:56:25.763612  110761 status.go:257] ha-794405 status: &{Name:ha-794405 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 17:56:25.763643  110761 status.go:255] checking status of ha-794405-m02 ...
	I0729 17:56:25.764050  110761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:25.764113  110761 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:25.779265  110761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45421
	I0729 17:56:25.779753  110761 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:25.780258  110761 main.go:141] libmachine: Using API Version  1
	I0729 17:56:25.780302  110761 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:25.780611  110761 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:25.780757  110761 main.go:141] libmachine: (ha-794405-m02) Calling .GetState
	I0729 17:56:25.782380  110761 status.go:330] ha-794405-m02 host status = "Running" (err=<nil>)
	I0729 17:56:25.782406  110761 host.go:66] Checking if "ha-794405-m02" exists ...
	I0729 17:56:25.782801  110761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:25.782846  110761 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:25.798804  110761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44425
	I0729 17:56:25.799366  110761 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:25.799893  110761 main.go:141] libmachine: Using API Version  1
	I0729 17:56:25.799915  110761 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:25.800279  110761 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:25.800482  110761 main.go:141] libmachine: (ha-794405-m02) Calling .GetIP
	I0729 17:56:25.803968  110761 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:56:25.804459  110761 main.go:141] libmachine: (ha-794405-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:4a:02", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:50:15 +0000 UTC Type:0 Mac:52:54:00:1a:4a:02 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-794405-m02 Clientid:01:52:54:00:1a:4a:02}
	I0729 17:56:25.804497  110761 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:56:25.804665  110761 host.go:66] Checking if "ha-794405-m02" exists ...
	I0729 17:56:25.805166  110761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:25.805217  110761 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:25.820445  110761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46743
	I0729 17:56:25.820830  110761 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:25.821341  110761 main.go:141] libmachine: Using API Version  1
	I0729 17:56:25.821371  110761 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:25.821685  110761 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:25.821925  110761 main.go:141] libmachine: (ha-794405-m02) Calling .DriverName
	I0729 17:56:25.822159  110761 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:56:25.822185  110761 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHHostname
	I0729 17:56:25.824954  110761 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:56:25.825383  110761 main.go:141] libmachine: (ha-794405-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:4a:02", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:50:15 +0000 UTC Type:0 Mac:52:54:00:1a:4a:02 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-794405-m02 Clientid:01:52:54:00:1a:4a:02}
	I0729 17:56:25.825407  110761 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:56:25.825540  110761 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHPort
	I0729 17:56:25.825695  110761 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHKeyPath
	I0729 17:56:25.825825  110761 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHUsername
	I0729 17:56:25.825949  110761 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m02/id_rsa Username:docker}
	W0729 17:56:26.601091  110761 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.62:22: connect: no route to host
	I0729 17:56:26.601142  110761 retry.go:31] will retry after 161.520519ms: dial tcp 192.168.39.62:22: connect: no route to host
	W0729 17:56:29.833174  110761 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.62:22: connect: no route to host
	W0729 17:56:29.833286  110761 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.62:22: connect: no route to host
	E0729 17:56:29.833305  110761 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.62:22: connect: no route to host
	I0729 17:56:29.833313  110761 status.go:257] ha-794405-m02 status: &{Name:ha-794405-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 17:56:29.833341  110761 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.62:22: connect: no route to host
	I0729 17:56:29.833349  110761 status.go:255] checking status of ha-794405-m03 ...
	I0729 17:56:29.833679  110761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:29.833721  110761 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:29.849020  110761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34595
	I0729 17:56:29.849420  110761 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:29.849871  110761 main.go:141] libmachine: Using API Version  1
	I0729 17:56:29.849897  110761 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:29.850235  110761 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:29.850470  110761 main.go:141] libmachine: (ha-794405-m03) Calling .GetState
	I0729 17:56:29.852112  110761 status.go:330] ha-794405-m03 host status = "Running" (err=<nil>)
	I0729 17:56:29.852131  110761 host.go:66] Checking if "ha-794405-m03" exists ...
	I0729 17:56:29.852415  110761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:29.852473  110761 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:29.868435  110761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41563
	I0729 17:56:29.869010  110761 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:29.869470  110761 main.go:141] libmachine: Using API Version  1
	I0729 17:56:29.869493  110761 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:29.869815  110761 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:29.870023  110761 main.go:141] libmachine: (ha-794405-m03) Calling .GetIP
	I0729 17:56:29.872926  110761 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:56:29.873294  110761 main.go:141] libmachine: (ha-794405-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:a7:17", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:51:28 +0000 UTC Type:0 Mac:52:54:00:6d:a7:17 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-794405-m03 Clientid:01:52:54:00:6d:a7:17}
	I0729 17:56:29.873330  110761 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined IP address 192.168.39.185 and MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:56:29.873454  110761 host.go:66] Checking if "ha-794405-m03" exists ...
	I0729 17:56:29.873809  110761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:29.873858  110761 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:29.889240  110761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45127
	I0729 17:56:29.889700  110761 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:29.890164  110761 main.go:141] libmachine: Using API Version  1
	I0729 17:56:29.890186  110761 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:29.890454  110761 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:29.890619  110761 main.go:141] libmachine: (ha-794405-m03) Calling .DriverName
	I0729 17:56:29.890799  110761 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:56:29.890829  110761 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHHostname
	I0729 17:56:29.893536  110761 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:56:29.893989  110761 main.go:141] libmachine: (ha-794405-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:a7:17", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:51:28 +0000 UTC Type:0 Mac:52:54:00:6d:a7:17 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-794405-m03 Clientid:01:52:54:00:6d:a7:17}
	I0729 17:56:29.894022  110761 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined IP address 192.168.39.185 and MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:56:29.894128  110761 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHPort
	I0729 17:56:29.894274  110761 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHKeyPath
	I0729 17:56:29.894404  110761 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHUsername
	I0729 17:56:29.894546  110761 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m03/id_rsa Username:docker}
	I0729 17:56:29.977456  110761 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:56:29.993012  110761 kubeconfig.go:125] found "ha-794405" server: "https://192.168.39.254:8443"
	I0729 17:56:29.993045  110761 api_server.go:166] Checking apiserver status ...
	I0729 17:56:29.993086  110761 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 17:56:30.007490  110761 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1518/cgroup
	W0729 17:56:30.017478  110761 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1518/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 17:56:30.017540  110761 ssh_runner.go:195] Run: ls
	I0729 17:56:30.022306  110761 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 17:56:30.026557  110761 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 17:56:30.026580  110761 status.go:422] ha-794405-m03 apiserver status = Running (err=<nil>)
	I0729 17:56:30.026588  110761 status.go:257] ha-794405-m03 status: &{Name:ha-794405-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 17:56:30.026617  110761 status.go:255] checking status of ha-794405-m04 ...
	I0729 17:56:30.026895  110761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:30.026934  110761 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:30.042226  110761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45885
	I0729 17:56:30.042750  110761 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:30.043262  110761 main.go:141] libmachine: Using API Version  1
	I0729 17:56:30.043286  110761 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:30.043582  110761 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:30.043755  110761 main.go:141] libmachine: (ha-794405-m04) Calling .GetState
	I0729 17:56:30.045323  110761 status.go:330] ha-794405-m04 host status = "Running" (err=<nil>)
	I0729 17:56:30.045344  110761 host.go:66] Checking if "ha-794405-m04" exists ...
	I0729 17:56:30.045763  110761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:30.045803  110761 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:30.060149  110761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42363
	I0729 17:56:30.060634  110761 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:30.061137  110761 main.go:141] libmachine: Using API Version  1
	I0729 17:56:30.061162  110761 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:30.061444  110761 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:30.061630  110761 main.go:141] libmachine: (ha-794405-m04) Calling .GetIP
	I0729 17:56:30.064360  110761 main.go:141] libmachine: (ha-794405-m04) DBG | domain ha-794405-m04 has defined MAC address 52:54:00:9f:d3:c3 in network mk-ha-794405
	I0729 17:56:30.064761  110761 main.go:141] libmachine: (ha-794405-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:d3:c3", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:52:49 +0000 UTC Type:0 Mac:52:54:00:9f:d3:c3 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-794405-m04 Clientid:01:52:54:00:9f:d3:c3}
	I0729 17:56:30.064794  110761 main.go:141] libmachine: (ha-794405-m04) DBG | domain ha-794405-m04 has defined IP address 192.168.39.179 and MAC address 52:54:00:9f:d3:c3 in network mk-ha-794405
	I0729 17:56:30.064982  110761 host.go:66] Checking if "ha-794405-m04" exists ...
	I0729 17:56:30.065282  110761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:30.065359  110761 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:30.080832  110761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36989
	I0729 17:56:30.081277  110761 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:30.081917  110761 main.go:141] libmachine: Using API Version  1
	I0729 17:56:30.081942  110761 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:30.082258  110761 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:30.082453  110761 main.go:141] libmachine: (ha-794405-m04) Calling .DriverName
	I0729 17:56:30.082675  110761 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:56:30.082701  110761 main.go:141] libmachine: (ha-794405-m04) Calling .GetSSHHostname
	I0729 17:56:30.085694  110761 main.go:141] libmachine: (ha-794405-m04) DBG | domain ha-794405-m04 has defined MAC address 52:54:00:9f:d3:c3 in network mk-ha-794405
	I0729 17:56:30.086165  110761 main.go:141] libmachine: (ha-794405-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:d3:c3", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:52:49 +0000 UTC Type:0 Mac:52:54:00:9f:d3:c3 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-794405-m04 Clientid:01:52:54:00:9f:d3:c3}
	I0729 17:56:30.086203  110761 main.go:141] libmachine: (ha-794405-m04) DBG | domain ha-794405-m04 has defined IP address 192.168.39.179 and MAC address 52:54:00:9f:d3:c3 in network mk-ha-794405
	I0729 17:56:30.086351  110761 main.go:141] libmachine: (ha-794405-m04) Calling .GetSSHPort
	I0729 17:56:30.086520  110761 main.go:141] libmachine: (ha-794405-m04) Calling .GetSSHKeyPath
	I0729 17:56:30.086696  110761 main.go:141] libmachine: (ha-794405-m04) Calling .GetSSHUsername
	I0729 17:56:30.086840  110761 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m04/id_rsa Username:docker}
	I0729 17:56:30.168692  110761 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:56:30.184848  110761 status.go:257] ha-794405-m04 status: &{Name:ha-794405-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
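Each run also logs "unable to find freezer cgroup" after running sudo egrep ^[0-9]+:freezer: /proc/<pid>/cgroup, which exits 1 with empty output; on a cgroup v2 (unified hierarchy) guest that file contains only a single "0::/..." line with no per-controller freezer entry, so an empty match is expected and the code falls back to the /healthz probe. Here is a minimal sketch of that lookup, not minikube's actual implementation; the PID used in main is an illustrative assumption taken from the log.

// freezer_check.go - illustrative sketch of the freezer-cgroup lookup.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// findFreezerCgroup scans /proc/<pid>/cgroup for a line whose controller
// list contains "freezer" and returns its cgroup path if present.
func findFreezerCgroup(pid int) (string, bool) {
	f, err := os.Open(fmt.Sprintf("/proc/%d/cgroup", pid))
	if err != nil {
		return "", false
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// Each line has the form "hierarchy-ID:controller-list:cgroup-path".
		parts := strings.SplitN(sc.Text(), ":", 3)
		if len(parts) == 3 && strings.Contains(parts[1], "freezer") {
			return parts[2], true
		}
	}
	return "", false
}

func main() {
	if path, ok := findFreezerCgroup(1217); ok {
		fmt.Println("freezer cgroup:", path)
	} else {
		fmt.Println("no freezer cgroup found; fall back to the healthz probe")
	}
}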
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-794405 status -v=7 --alsologtostderr: exit status 3 (3.717696605s)

                                                
                                                
-- stdout --
	ha-794405
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-794405-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-794405-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-794405-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 17:56:33.086845  110860 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:56:33.086965  110860 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:56:33.086975  110860 out.go:304] Setting ErrFile to fd 2...
	I0729 17:56:33.086979  110860 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:56:33.087154  110860 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19339-88081/.minikube/bin
	I0729 17:56:33.087356  110860 out.go:298] Setting JSON to false
	I0729 17:56:33.087385  110860 mustload.go:65] Loading cluster: ha-794405
	I0729 17:56:33.087497  110860 notify.go:220] Checking for updates...
	I0729 17:56:33.087868  110860 config.go:182] Loaded profile config "ha-794405": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:56:33.087886  110860 status.go:255] checking status of ha-794405 ...
	I0729 17:56:33.088270  110860 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:33.088328  110860 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:33.106968  110860 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42015
	I0729 17:56:33.107568  110860 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:33.108270  110860 main.go:141] libmachine: Using API Version  1
	I0729 17:56:33.108300  110860 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:33.108755  110860 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:33.109007  110860 main.go:141] libmachine: (ha-794405) Calling .GetState
	I0729 17:56:33.110930  110860 status.go:330] ha-794405 host status = "Running" (err=<nil>)
	I0729 17:56:33.110952  110860 host.go:66] Checking if "ha-794405" exists ...
	I0729 17:56:33.111268  110860 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:33.111320  110860 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:33.126613  110860 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33577
	I0729 17:56:33.127050  110860 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:33.127548  110860 main.go:141] libmachine: Using API Version  1
	I0729 17:56:33.127569  110860 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:33.127903  110860 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:33.128087  110860 main.go:141] libmachine: (ha-794405) Calling .GetIP
	I0729 17:56:33.130869  110860 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:56:33.131270  110860 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:56:33.131306  110860 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:56:33.131434  110860 host.go:66] Checking if "ha-794405" exists ...
	I0729 17:56:33.131714  110860 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:33.131748  110860 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:33.146769  110860 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42425
	I0729 17:56:33.147161  110860 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:33.147600  110860 main.go:141] libmachine: Using API Version  1
	I0729 17:56:33.147630  110860 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:33.147909  110860 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:33.148085  110860 main.go:141] libmachine: (ha-794405) Calling .DriverName
	I0729 17:56:33.148251  110860 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:56:33.148273  110860 main.go:141] libmachine: (ha-794405) Calling .GetSSHHostname
	I0729 17:56:33.150965  110860 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:56:33.151387  110860 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:56:33.151418  110860 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:56:33.151556  110860 main.go:141] libmachine: (ha-794405) Calling .GetSSHPort
	I0729 17:56:33.151728  110860 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:56:33.151858  110860 main.go:141] libmachine: (ha-794405) Calling .GetSSHUsername
	I0729 17:56:33.152024  110860 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405/id_rsa Username:docker}
	I0729 17:56:33.232464  110860 ssh_runner.go:195] Run: systemctl --version
	I0729 17:56:33.238843  110860 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:56:33.258495  110860 kubeconfig.go:125] found "ha-794405" server: "https://192.168.39.254:8443"
	I0729 17:56:33.258538  110860 api_server.go:166] Checking apiserver status ...
	I0729 17:56:33.258580  110860 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 17:56:33.271742  110860 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1217/cgroup
	W0729 17:56:33.281293  110860 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1217/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 17:56:33.281354  110860 ssh_runner.go:195] Run: ls
	I0729 17:56:33.286220  110860 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 17:56:33.290534  110860 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 17:56:33.290554  110860 status.go:422] ha-794405 apiserver status = Running (err=<nil>)
	I0729 17:56:33.290564  110860 status.go:257] ha-794405 status: &{Name:ha-794405 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 17:56:33.290591  110860 status.go:255] checking status of ha-794405-m02 ...
	I0729 17:56:33.290872  110860 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:33.290905  110860 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:33.306533  110860 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46275
	I0729 17:56:33.306955  110860 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:33.307438  110860 main.go:141] libmachine: Using API Version  1
	I0729 17:56:33.307458  110860 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:33.307764  110860 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:33.307960  110860 main.go:141] libmachine: (ha-794405-m02) Calling .GetState
	I0729 17:56:33.309510  110860 status.go:330] ha-794405-m02 host status = "Running" (err=<nil>)
	I0729 17:56:33.309527  110860 host.go:66] Checking if "ha-794405-m02" exists ...
	I0729 17:56:33.309850  110860 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:33.309896  110860 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:33.325385  110860 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44247
	I0729 17:56:33.325757  110860 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:33.326225  110860 main.go:141] libmachine: Using API Version  1
	I0729 17:56:33.326243  110860 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:33.326543  110860 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:33.326726  110860 main.go:141] libmachine: (ha-794405-m02) Calling .GetIP
	I0729 17:56:33.329184  110860 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:56:33.329615  110860 main.go:141] libmachine: (ha-794405-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:4a:02", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:50:15 +0000 UTC Type:0 Mac:52:54:00:1a:4a:02 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-794405-m02 Clientid:01:52:54:00:1a:4a:02}
	I0729 17:56:33.329637  110860 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:56:33.329845  110860 host.go:66] Checking if "ha-794405-m02" exists ...
	I0729 17:56:33.330303  110860 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:33.330354  110860 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:33.345612  110860 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33959
	I0729 17:56:33.345952  110860 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:33.346427  110860 main.go:141] libmachine: Using API Version  1
	I0729 17:56:33.346462  110860 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:33.346769  110860 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:33.346949  110860 main.go:141] libmachine: (ha-794405-m02) Calling .DriverName
	I0729 17:56:33.347122  110860 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:56:33.347145  110860 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHHostname
	I0729 17:56:33.349825  110860 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:56:33.350317  110860 main.go:141] libmachine: (ha-794405-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:4a:02", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:50:15 +0000 UTC Type:0 Mac:52:54:00:1a:4a:02 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-794405-m02 Clientid:01:52:54:00:1a:4a:02}
	I0729 17:56:33.350341  110860 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:56:33.350483  110860 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHPort
	I0729 17:56:33.350656  110860 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHKeyPath
	I0729 17:56:33.350785  110860 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHUsername
	I0729 17:56:33.350899  110860 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m02/id_rsa Username:docker}
	W0729 17:56:36.425104  110860 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.62:22: connect: no route to host
	W0729 17:56:36.425216  110860 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.62:22: connect: no route to host
	E0729 17:56:36.425238  110860 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.62:22: connect: no route to host
	I0729 17:56:36.425250  110860 status.go:257] ha-794405-m02 status: &{Name:ha-794405-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 17:56:36.425275  110860 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.62:22: connect: no route to host
	I0729 17:56:36.425291  110860 status.go:255] checking status of ha-794405-m03 ...
	I0729 17:56:36.425753  110860 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:36.425810  110860 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:36.442163  110860 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34837
	I0729 17:56:36.442600  110860 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:36.443044  110860 main.go:141] libmachine: Using API Version  1
	I0729 17:56:36.443080  110860 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:36.443411  110860 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:36.443598  110860 main.go:141] libmachine: (ha-794405-m03) Calling .GetState
	I0729 17:56:36.445371  110860 status.go:330] ha-794405-m03 host status = "Running" (err=<nil>)
	I0729 17:56:36.445401  110860 host.go:66] Checking if "ha-794405-m03" exists ...
	I0729 17:56:36.445698  110860 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:36.445749  110860 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:36.459670  110860 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39309
	I0729 17:56:36.460064  110860 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:36.460524  110860 main.go:141] libmachine: Using API Version  1
	I0729 17:56:36.460547  110860 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:36.460870  110860 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:36.461056  110860 main.go:141] libmachine: (ha-794405-m03) Calling .GetIP
	I0729 17:56:36.463328  110860 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:56:36.463685  110860 main.go:141] libmachine: (ha-794405-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:a7:17", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:51:28 +0000 UTC Type:0 Mac:52:54:00:6d:a7:17 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-794405-m03 Clientid:01:52:54:00:6d:a7:17}
	I0729 17:56:36.463719  110860 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined IP address 192.168.39.185 and MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:56:36.463792  110860 host.go:66] Checking if "ha-794405-m03" exists ...
	I0729 17:56:36.464076  110860 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:36.464114  110860 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:36.478652  110860 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41469
	I0729 17:56:36.479160  110860 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:36.479656  110860 main.go:141] libmachine: Using API Version  1
	I0729 17:56:36.479682  110860 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:36.480036  110860 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:36.480217  110860 main.go:141] libmachine: (ha-794405-m03) Calling .DriverName
	I0729 17:56:36.480374  110860 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:56:36.480394  110860 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHHostname
	I0729 17:56:36.482867  110860 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:56:36.483241  110860 main.go:141] libmachine: (ha-794405-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:a7:17", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:51:28 +0000 UTC Type:0 Mac:52:54:00:6d:a7:17 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-794405-m03 Clientid:01:52:54:00:6d:a7:17}
	I0729 17:56:36.483277  110860 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined IP address 192.168.39.185 and MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:56:36.483378  110860 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHPort
	I0729 17:56:36.483515  110860 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHKeyPath
	I0729 17:56:36.483642  110860 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHUsername
	I0729 17:56:36.483750  110860 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m03/id_rsa Username:docker}
	I0729 17:56:36.560639  110860 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:56:36.574498  110860 kubeconfig.go:125] found "ha-794405" server: "https://192.168.39.254:8443"
	I0729 17:56:36.574530  110860 api_server.go:166] Checking apiserver status ...
	I0729 17:56:36.574567  110860 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 17:56:36.588610  110860 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1518/cgroup
	W0729 17:56:36.597701  110860 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1518/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 17:56:36.597740  110860 ssh_runner.go:195] Run: ls
	I0729 17:56:36.602062  110860 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 17:56:36.606389  110860 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 17:56:36.606411  110860 status.go:422] ha-794405-m03 apiserver status = Running (err=<nil>)
	I0729 17:56:36.606420  110860 status.go:257] ha-794405-m03 status: &{Name:ha-794405-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 17:56:36.606434  110860 status.go:255] checking status of ha-794405-m04 ...
	I0729 17:56:36.606720  110860 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:36.606756  110860 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:36.621627  110860 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45257
	I0729 17:56:36.622070  110860 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:36.622662  110860 main.go:141] libmachine: Using API Version  1
	I0729 17:56:36.622688  110860 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:36.623032  110860 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:36.623230  110860 main.go:141] libmachine: (ha-794405-m04) Calling .GetState
	I0729 17:56:36.624891  110860 status.go:330] ha-794405-m04 host status = "Running" (err=<nil>)
	I0729 17:56:36.624912  110860 host.go:66] Checking if "ha-794405-m04" exists ...
	I0729 17:56:36.625215  110860 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:36.625249  110860 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:36.639942  110860 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33789
	I0729 17:56:36.640291  110860 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:36.640776  110860 main.go:141] libmachine: Using API Version  1
	I0729 17:56:36.640796  110860 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:36.641114  110860 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:36.641293  110860 main.go:141] libmachine: (ha-794405-m04) Calling .GetIP
	I0729 17:56:36.643634  110860 main.go:141] libmachine: (ha-794405-m04) DBG | domain ha-794405-m04 has defined MAC address 52:54:00:9f:d3:c3 in network mk-ha-794405
	I0729 17:56:36.644115  110860 main.go:141] libmachine: (ha-794405-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:d3:c3", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:52:49 +0000 UTC Type:0 Mac:52:54:00:9f:d3:c3 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-794405-m04 Clientid:01:52:54:00:9f:d3:c3}
	I0729 17:56:36.644157  110860 main.go:141] libmachine: (ha-794405-m04) DBG | domain ha-794405-m04 has defined IP address 192.168.39.179 and MAC address 52:54:00:9f:d3:c3 in network mk-ha-794405
	I0729 17:56:36.644289  110860 host.go:66] Checking if "ha-794405-m04" exists ...
	I0729 17:56:36.644611  110860 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:36.644646  110860 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:36.658891  110860 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42591
	I0729 17:56:36.659221  110860 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:36.659637  110860 main.go:141] libmachine: Using API Version  1
	I0729 17:56:36.659662  110860 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:36.659950  110860 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:36.660118  110860 main.go:141] libmachine: (ha-794405-m04) Calling .DriverName
	I0729 17:56:36.660292  110860 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:56:36.660310  110860 main.go:141] libmachine: (ha-794405-m04) Calling .GetSSHHostname
	I0729 17:56:36.663120  110860 main.go:141] libmachine: (ha-794405-m04) DBG | domain ha-794405-m04 has defined MAC address 52:54:00:9f:d3:c3 in network mk-ha-794405
	I0729 17:56:36.663528  110860 main.go:141] libmachine: (ha-794405-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:d3:c3", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:52:49 +0000 UTC Type:0 Mac:52:54:00:9f:d3:c3 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-794405-m04 Clientid:01:52:54:00:9f:d3:c3}
	I0729 17:56:36.663549  110860 main.go:141] libmachine: (ha-794405-m04) DBG | domain ha-794405-m04 has defined IP address 192.168.39.179 and MAC address 52:54:00:9f:d3:c3 in network mk-ha-794405
	I0729 17:56:36.663683  110860 main.go:141] libmachine: (ha-794405-m04) Calling .GetSSHPort
	I0729 17:56:36.663849  110860 main.go:141] libmachine: (ha-794405-m04) Calling .GetSSHKeyPath
	I0729 17:56:36.664026  110860 main.go:141] libmachine: (ha-794405-m04) Calling .GetSSHUsername
	I0729 17:56:36.664146  110860 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m04/id_rsa Username:docker}
	I0729 17:56:36.748482  110860 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:56:36.762157  110860 status.go:257] ha-794405-m04 status: &{Name:ha-794405-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-794405 status -v=7 --alsologtostderr: exit status 3 (3.735374326s)

                                                
                                                
-- stdout --
	ha-794405
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-794405-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-794405-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-794405-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 17:56:41.755644  110977 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:56:41.755783  110977 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:56:41.755793  110977 out.go:304] Setting ErrFile to fd 2...
	I0729 17:56:41.755799  110977 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:56:41.755976  110977 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19339-88081/.minikube/bin
	I0729 17:56:41.756161  110977 out.go:298] Setting JSON to false
	I0729 17:56:41.756195  110977 mustload.go:65] Loading cluster: ha-794405
	I0729 17:56:41.756241  110977 notify.go:220] Checking for updates...
	I0729 17:56:41.756754  110977 config.go:182] Loaded profile config "ha-794405": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:56:41.756778  110977 status.go:255] checking status of ha-794405 ...
	I0729 17:56:41.757209  110977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:41.757256  110977 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:41.772576  110977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39965
	I0729 17:56:41.773090  110977 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:41.773679  110977 main.go:141] libmachine: Using API Version  1
	I0729 17:56:41.773699  110977 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:41.774092  110977 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:41.774285  110977 main.go:141] libmachine: (ha-794405) Calling .GetState
	I0729 17:56:41.776048  110977 status.go:330] ha-794405 host status = "Running" (err=<nil>)
	I0729 17:56:41.776067  110977 host.go:66] Checking if "ha-794405" exists ...
	I0729 17:56:41.776499  110977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:41.776554  110977 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:41.794083  110977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43015
	I0729 17:56:41.794517  110977 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:41.795031  110977 main.go:141] libmachine: Using API Version  1
	I0729 17:56:41.795055  110977 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:41.795336  110977 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:41.795595  110977 main.go:141] libmachine: (ha-794405) Calling .GetIP
	I0729 17:56:41.798799  110977 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:56:41.799248  110977 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:56:41.799283  110977 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:56:41.799378  110977 host.go:66] Checking if "ha-794405" exists ...
	I0729 17:56:41.799760  110977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:41.799805  110977 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:41.814350  110977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46753
	I0729 17:56:41.814716  110977 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:41.815163  110977 main.go:141] libmachine: Using API Version  1
	I0729 17:56:41.815191  110977 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:41.815502  110977 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:41.815699  110977 main.go:141] libmachine: (ha-794405) Calling .DriverName
	I0729 17:56:41.815924  110977 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:56:41.815949  110977 main.go:141] libmachine: (ha-794405) Calling .GetSSHHostname
	I0729 17:56:41.818387  110977 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:56:41.818806  110977 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:56:41.818836  110977 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:56:41.818947  110977 main.go:141] libmachine: (ha-794405) Calling .GetSSHPort
	I0729 17:56:41.819125  110977 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:56:41.819273  110977 main.go:141] libmachine: (ha-794405) Calling .GetSSHUsername
	I0729 17:56:41.819417  110977 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405/id_rsa Username:docker}
	I0729 17:56:41.900628  110977 ssh_runner.go:195] Run: systemctl --version
	I0729 17:56:41.907198  110977 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:56:41.923390  110977 kubeconfig.go:125] found "ha-794405" server: "https://192.168.39.254:8443"
	I0729 17:56:41.923419  110977 api_server.go:166] Checking apiserver status ...
	I0729 17:56:41.923457  110977 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 17:56:41.937620  110977 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1217/cgroup
	W0729 17:56:41.947203  110977 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1217/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 17:56:41.947257  110977 ssh_runner.go:195] Run: ls
	I0729 17:56:41.951967  110977 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 17:56:41.956229  110977 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 17:56:41.956257  110977 status.go:422] ha-794405 apiserver status = Running (err=<nil>)
	I0729 17:56:41.956270  110977 status.go:257] ha-794405 status: &{Name:ha-794405 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 17:56:41.956294  110977 status.go:255] checking status of ha-794405-m02 ...
	I0729 17:56:41.956743  110977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:41.956798  110977 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:41.973930  110977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36573
	I0729 17:56:41.974410  110977 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:41.974990  110977 main.go:141] libmachine: Using API Version  1
	I0729 17:56:41.975017  110977 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:41.975290  110977 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:41.975495  110977 main.go:141] libmachine: (ha-794405-m02) Calling .GetState
	I0729 17:56:41.976918  110977 status.go:330] ha-794405-m02 host status = "Running" (err=<nil>)
	I0729 17:56:41.976936  110977 host.go:66] Checking if "ha-794405-m02" exists ...
	I0729 17:56:41.977255  110977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:41.977304  110977 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:41.991651  110977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35213
	I0729 17:56:41.992020  110977 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:41.992486  110977 main.go:141] libmachine: Using API Version  1
	I0729 17:56:41.992513  110977 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:41.992868  110977 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:41.993091  110977 main.go:141] libmachine: (ha-794405-m02) Calling .GetIP
	I0729 17:56:41.996432  110977 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:56:41.996930  110977 main.go:141] libmachine: (ha-794405-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:4a:02", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:50:15 +0000 UTC Type:0 Mac:52:54:00:1a:4a:02 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-794405-m02 Clientid:01:52:54:00:1a:4a:02}
	I0729 17:56:41.996956  110977 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:56:41.997146  110977 host.go:66] Checking if "ha-794405-m02" exists ...
	I0729 17:56:41.997547  110977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:41.997593  110977 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:42.012642  110977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36333
	I0729 17:56:42.013115  110977 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:42.013575  110977 main.go:141] libmachine: Using API Version  1
	I0729 17:56:42.013593  110977 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:42.013871  110977 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:42.014061  110977 main.go:141] libmachine: (ha-794405-m02) Calling .DriverName
	I0729 17:56:42.014212  110977 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:56:42.014230  110977 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHHostname
	I0729 17:56:42.016919  110977 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:56:42.017393  110977 main.go:141] libmachine: (ha-794405-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:4a:02", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:50:15 +0000 UTC Type:0 Mac:52:54:00:1a:4a:02 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-794405-m02 Clientid:01:52:54:00:1a:4a:02}
	I0729 17:56:42.017412  110977 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:56:42.017537  110977 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHPort
	I0729 17:56:42.017721  110977 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHKeyPath
	I0729 17:56:42.017876  110977 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHUsername
	I0729 17:56:42.018032  110977 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m02/id_rsa Username:docker}
	W0729 17:56:45.097082  110977 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.62:22: connect: no route to host
	W0729 17:56:45.097205  110977 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.62:22: connect: no route to host
	E0729 17:56:45.097238  110977 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.62:22: connect: no route to host
	I0729 17:56:45.097250  110977 status.go:257] ha-794405-m02 status: &{Name:ha-794405-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 17:56:45.097274  110977 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.62:22: connect: no route to host
	I0729 17:56:45.097288  110977 status.go:255] checking status of ha-794405-m03 ...
	I0729 17:56:45.097666  110977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:45.097755  110977 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:45.112549  110977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33225
	I0729 17:56:45.113008  110977 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:45.113473  110977 main.go:141] libmachine: Using API Version  1
	I0729 17:56:45.113491  110977 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:45.113864  110977 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:45.114065  110977 main.go:141] libmachine: (ha-794405-m03) Calling .GetState
	I0729 17:56:45.115818  110977 status.go:330] ha-794405-m03 host status = "Running" (err=<nil>)
	I0729 17:56:45.115834  110977 host.go:66] Checking if "ha-794405-m03" exists ...
	I0729 17:56:45.116190  110977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:45.116235  110977 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:45.130651  110977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46815
	I0729 17:56:45.131020  110977 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:45.131449  110977 main.go:141] libmachine: Using API Version  1
	I0729 17:56:45.131469  110977 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:45.131819  110977 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:45.132014  110977 main.go:141] libmachine: (ha-794405-m03) Calling .GetIP
	I0729 17:56:45.134703  110977 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:56:45.135097  110977 main.go:141] libmachine: (ha-794405-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:a7:17", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:51:28 +0000 UTC Type:0 Mac:52:54:00:6d:a7:17 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-794405-m03 Clientid:01:52:54:00:6d:a7:17}
	I0729 17:56:45.135118  110977 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined IP address 192.168.39.185 and MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:56:45.135215  110977 host.go:66] Checking if "ha-794405-m03" exists ...
	I0729 17:56:45.135541  110977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:45.135581  110977 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:45.151156  110977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46817
	I0729 17:56:45.151584  110977 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:45.152068  110977 main.go:141] libmachine: Using API Version  1
	I0729 17:56:45.152088  110977 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:45.152430  110977 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:45.152602  110977 main.go:141] libmachine: (ha-794405-m03) Calling .DriverName
	I0729 17:56:45.152767  110977 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:56:45.152789  110977 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHHostname
	I0729 17:56:45.155600  110977 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:56:45.156068  110977 main.go:141] libmachine: (ha-794405-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:a7:17", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:51:28 +0000 UTC Type:0 Mac:52:54:00:6d:a7:17 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-794405-m03 Clientid:01:52:54:00:6d:a7:17}
	I0729 17:56:45.156095  110977 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined IP address 192.168.39.185 and MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:56:45.156231  110977 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHPort
	I0729 17:56:45.156406  110977 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHKeyPath
	I0729 17:56:45.156576  110977 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHUsername
	I0729 17:56:45.156724  110977 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m03/id_rsa Username:docker}
	I0729 17:56:45.236806  110977 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:56:45.252449  110977 kubeconfig.go:125] found "ha-794405" server: "https://192.168.39.254:8443"
	I0729 17:56:45.252477  110977 api_server.go:166] Checking apiserver status ...
	I0729 17:56:45.252517  110977 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 17:56:45.267106  110977 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1518/cgroup
	W0729 17:56:45.277185  110977 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1518/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 17:56:45.277238  110977 ssh_runner.go:195] Run: ls
	I0729 17:56:45.281933  110977 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 17:56:45.289489  110977 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 17:56:45.289523  110977 status.go:422] ha-794405-m03 apiserver status = Running (err=<nil>)
	I0729 17:56:45.289536  110977 status.go:257] ha-794405-m03 status: &{Name:ha-794405-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 17:56:45.289557  110977 status.go:255] checking status of ha-794405-m04 ...
	I0729 17:56:45.290026  110977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:45.290087  110977 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:45.305691  110977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40679
	I0729 17:56:45.306075  110977 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:45.306630  110977 main.go:141] libmachine: Using API Version  1
	I0729 17:56:45.306655  110977 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:45.306981  110977 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:45.307185  110977 main.go:141] libmachine: (ha-794405-m04) Calling .GetState
	I0729 17:56:45.308652  110977 status.go:330] ha-794405-m04 host status = "Running" (err=<nil>)
	I0729 17:56:45.308670  110977 host.go:66] Checking if "ha-794405-m04" exists ...
	I0729 17:56:45.309090  110977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:45.309157  110977 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:45.323288  110977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40577
	I0729 17:56:45.323753  110977 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:45.324189  110977 main.go:141] libmachine: Using API Version  1
	I0729 17:56:45.324212  110977 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:45.324505  110977 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:45.324683  110977 main.go:141] libmachine: (ha-794405-m04) Calling .GetIP
	I0729 17:56:45.327603  110977 main.go:141] libmachine: (ha-794405-m04) DBG | domain ha-794405-m04 has defined MAC address 52:54:00:9f:d3:c3 in network mk-ha-794405
	I0729 17:56:45.328043  110977 main.go:141] libmachine: (ha-794405-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:d3:c3", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:52:49 +0000 UTC Type:0 Mac:52:54:00:9f:d3:c3 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-794405-m04 Clientid:01:52:54:00:9f:d3:c3}
	I0729 17:56:45.328080  110977 main.go:141] libmachine: (ha-794405-m04) DBG | domain ha-794405-m04 has defined IP address 192.168.39.179 and MAC address 52:54:00:9f:d3:c3 in network mk-ha-794405
	I0729 17:56:45.328187  110977 host.go:66] Checking if "ha-794405-m04" exists ...
	I0729 17:56:45.328580  110977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:45.328625  110977 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:45.343062  110977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35773
	I0729 17:56:45.343416  110977 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:45.343844  110977 main.go:141] libmachine: Using API Version  1
	I0729 17:56:45.343863  110977 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:45.344170  110977 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:45.344324  110977 main.go:141] libmachine: (ha-794405-m04) Calling .DriverName
	I0729 17:56:45.344476  110977 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:56:45.344501  110977 main.go:141] libmachine: (ha-794405-m04) Calling .GetSSHHostname
	I0729 17:56:45.347208  110977 main.go:141] libmachine: (ha-794405-m04) DBG | domain ha-794405-m04 has defined MAC address 52:54:00:9f:d3:c3 in network mk-ha-794405
	I0729 17:56:45.347610  110977 main.go:141] libmachine: (ha-794405-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:d3:c3", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:52:49 +0000 UTC Type:0 Mac:52:54:00:9f:d3:c3 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-794405-m04 Clientid:01:52:54:00:9f:d3:c3}
	I0729 17:56:45.347635  110977 main.go:141] libmachine: (ha-794405-m04) DBG | domain ha-794405-m04 has defined IP address 192.168.39.179 and MAC address 52:54:00:9f:d3:c3 in network mk-ha-794405
	I0729 17:56:45.347829  110977 main.go:141] libmachine: (ha-794405-m04) Calling .GetSSHPort
	I0729 17:56:45.347991  110977 main.go:141] libmachine: (ha-794405-m04) Calling .GetSSHKeyPath
	I0729 17:56:45.348151  110977 main.go:141] libmachine: (ha-794405-m04) Calling .GetSSHUsername
	I0729 17:56:45.348275  110977 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m04/id_rsa Username:docker}
	I0729 17:56:45.432537  110977 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:56:45.447045  110977 status.go:257] ha-794405-m04 status: &{Name:ha-794405-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-794405 status -v=7 --alsologtostderr: exit status 7 (606.577082ms)

                                                
                                                
-- stdout --
	ha-794405
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-794405-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-794405-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-794405-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 17:56:52.366258  111109 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:56:52.366364  111109 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:56:52.366372  111109 out.go:304] Setting ErrFile to fd 2...
	I0729 17:56:52.366376  111109 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:56:52.366575  111109 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19339-88081/.minikube/bin
	I0729 17:56:52.366809  111109 out.go:298] Setting JSON to false
	I0729 17:56:52.366850  111109 mustload.go:65] Loading cluster: ha-794405
	I0729 17:56:52.366962  111109 notify.go:220] Checking for updates...
	I0729 17:56:52.367301  111109 config.go:182] Loaded profile config "ha-794405": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:56:52.367322  111109 status.go:255] checking status of ha-794405 ...
	I0729 17:56:52.367849  111109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:52.367925  111109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:52.386402  111109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43827
	I0729 17:56:52.386838  111109 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:52.387418  111109 main.go:141] libmachine: Using API Version  1
	I0729 17:56:52.387439  111109 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:52.387869  111109 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:52.388075  111109 main.go:141] libmachine: (ha-794405) Calling .GetState
	I0729 17:56:52.389735  111109 status.go:330] ha-794405 host status = "Running" (err=<nil>)
	I0729 17:56:52.389766  111109 host.go:66] Checking if "ha-794405" exists ...
	I0729 17:56:52.390055  111109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:52.390090  111109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:52.406283  111109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43697
	I0729 17:56:52.406697  111109 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:52.407230  111109 main.go:141] libmachine: Using API Version  1
	I0729 17:56:52.407258  111109 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:52.407561  111109 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:52.407748  111109 main.go:141] libmachine: (ha-794405) Calling .GetIP
	I0729 17:56:52.410623  111109 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:56:52.411036  111109 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:56:52.411060  111109 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:56:52.411159  111109 host.go:66] Checking if "ha-794405" exists ...
	I0729 17:56:52.411444  111109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:52.411496  111109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:52.426134  111109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44553
	I0729 17:56:52.426554  111109 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:52.426996  111109 main.go:141] libmachine: Using API Version  1
	I0729 17:56:52.427014  111109 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:52.427316  111109 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:52.427450  111109 main.go:141] libmachine: (ha-794405) Calling .DriverName
	I0729 17:56:52.427618  111109 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:56:52.427641  111109 main.go:141] libmachine: (ha-794405) Calling .GetSSHHostname
	I0729 17:56:52.430278  111109 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:56:52.430633  111109 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:56:52.430667  111109 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:56:52.430758  111109 main.go:141] libmachine: (ha-794405) Calling .GetSSHPort
	I0729 17:56:52.430937  111109 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:56:52.431088  111109 main.go:141] libmachine: (ha-794405) Calling .GetSSHUsername
	I0729 17:56:52.431227  111109 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405/id_rsa Username:docker}
	I0729 17:56:52.513722  111109 ssh_runner.go:195] Run: systemctl --version
	I0729 17:56:52.520347  111109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:56:52.537756  111109 kubeconfig.go:125] found "ha-794405" server: "https://192.168.39.254:8443"
	I0729 17:56:52.537783  111109 api_server.go:166] Checking apiserver status ...
	I0729 17:56:52.537822  111109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 17:56:52.552993  111109 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1217/cgroup
	W0729 17:56:52.562863  111109 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1217/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 17:56:52.562908  111109 ssh_runner.go:195] Run: ls
	I0729 17:56:52.567820  111109 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 17:56:52.571983  111109 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 17:56:52.572005  111109 status.go:422] ha-794405 apiserver status = Running (err=<nil>)
	I0729 17:56:52.572015  111109 status.go:257] ha-794405 status: &{Name:ha-794405 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 17:56:52.572031  111109 status.go:255] checking status of ha-794405-m02 ...
	I0729 17:56:52.572309  111109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:52.572341  111109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:52.587272  111109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43291
	I0729 17:56:52.587719  111109 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:52.588228  111109 main.go:141] libmachine: Using API Version  1
	I0729 17:56:52.588258  111109 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:52.588662  111109 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:52.588882  111109 main.go:141] libmachine: (ha-794405-m02) Calling .GetState
	I0729 17:56:52.590557  111109 status.go:330] ha-794405-m02 host status = "Stopped" (err=<nil>)
	I0729 17:56:52.590571  111109 status.go:343] host is not running, skipping remaining checks
	I0729 17:56:52.590578  111109 status.go:257] ha-794405-m02 status: &{Name:ha-794405-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 17:56:52.590604  111109 status.go:255] checking status of ha-794405-m03 ...
	I0729 17:56:52.590955  111109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:52.591010  111109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:52.605546  111109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35065
	I0729 17:56:52.605946  111109 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:52.606429  111109 main.go:141] libmachine: Using API Version  1
	I0729 17:56:52.606454  111109 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:52.606729  111109 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:52.606898  111109 main.go:141] libmachine: (ha-794405-m03) Calling .GetState
	I0729 17:56:52.608420  111109 status.go:330] ha-794405-m03 host status = "Running" (err=<nil>)
	I0729 17:56:52.608436  111109 host.go:66] Checking if "ha-794405-m03" exists ...
	I0729 17:56:52.608723  111109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:52.608765  111109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:52.623087  111109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34081
	I0729 17:56:52.623448  111109 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:52.623978  111109 main.go:141] libmachine: Using API Version  1
	I0729 17:56:52.624007  111109 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:52.624317  111109 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:52.624486  111109 main.go:141] libmachine: (ha-794405-m03) Calling .GetIP
	I0729 17:56:52.627226  111109 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:56:52.627653  111109 main.go:141] libmachine: (ha-794405-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:a7:17", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:51:28 +0000 UTC Type:0 Mac:52:54:00:6d:a7:17 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-794405-m03 Clientid:01:52:54:00:6d:a7:17}
	I0729 17:56:52.627678  111109 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined IP address 192.168.39.185 and MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:56:52.627813  111109 host.go:66] Checking if "ha-794405-m03" exists ...
	I0729 17:56:52.628157  111109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:52.628209  111109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:52.643655  111109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46313
	I0729 17:56:52.644137  111109 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:52.644632  111109 main.go:141] libmachine: Using API Version  1
	I0729 17:56:52.644658  111109 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:52.645010  111109 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:52.645161  111109 main.go:141] libmachine: (ha-794405-m03) Calling .DriverName
	I0729 17:56:52.645291  111109 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:56:52.645321  111109 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHHostname
	I0729 17:56:52.647802  111109 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:56:52.648211  111109 main.go:141] libmachine: (ha-794405-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:a7:17", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:51:28 +0000 UTC Type:0 Mac:52:54:00:6d:a7:17 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-794405-m03 Clientid:01:52:54:00:6d:a7:17}
	I0729 17:56:52.648236  111109 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined IP address 192.168.39.185 and MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:56:52.648406  111109 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHPort
	I0729 17:56:52.648550  111109 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHKeyPath
	I0729 17:56:52.648696  111109 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHUsername
	I0729 17:56:52.648784  111109 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m03/id_rsa Username:docker}
	I0729 17:56:52.729206  111109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:56:52.742945  111109 kubeconfig.go:125] found "ha-794405" server: "https://192.168.39.254:8443"
	I0729 17:56:52.742973  111109 api_server.go:166] Checking apiserver status ...
	I0729 17:56:52.743008  111109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 17:56:52.757967  111109 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1518/cgroup
	W0729 17:56:52.768180  111109 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1518/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 17:56:52.768243  111109 ssh_runner.go:195] Run: ls
	I0729 17:56:52.773461  111109 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 17:56:52.777731  111109 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 17:56:52.777754  111109 status.go:422] ha-794405-m03 apiserver status = Running (err=<nil>)
	I0729 17:56:52.777764  111109 status.go:257] ha-794405-m03 status: &{Name:ha-794405-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 17:56:52.777785  111109 status.go:255] checking status of ha-794405-m04 ...
	I0729 17:56:52.778082  111109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:52.778147  111109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:52.793201  111109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38867
	I0729 17:56:52.793680  111109 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:52.794152  111109 main.go:141] libmachine: Using API Version  1
	I0729 17:56:52.794175  111109 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:52.794476  111109 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:52.794715  111109 main.go:141] libmachine: (ha-794405-m04) Calling .GetState
	I0729 17:56:52.796243  111109 status.go:330] ha-794405-m04 host status = "Running" (err=<nil>)
	I0729 17:56:52.796260  111109 host.go:66] Checking if "ha-794405-m04" exists ...
	I0729 17:56:52.796529  111109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:52.796572  111109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:52.811997  111109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46755
	I0729 17:56:52.812384  111109 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:52.813003  111109 main.go:141] libmachine: Using API Version  1
	I0729 17:56:52.813028  111109 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:52.813384  111109 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:52.813577  111109 main.go:141] libmachine: (ha-794405-m04) Calling .GetIP
	I0729 17:56:52.816221  111109 main.go:141] libmachine: (ha-794405-m04) DBG | domain ha-794405-m04 has defined MAC address 52:54:00:9f:d3:c3 in network mk-ha-794405
	I0729 17:56:52.816611  111109 main.go:141] libmachine: (ha-794405-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:d3:c3", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:52:49 +0000 UTC Type:0 Mac:52:54:00:9f:d3:c3 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-794405-m04 Clientid:01:52:54:00:9f:d3:c3}
	I0729 17:56:52.816648  111109 main.go:141] libmachine: (ha-794405-m04) DBG | domain ha-794405-m04 has defined IP address 192.168.39.179 and MAC address 52:54:00:9f:d3:c3 in network mk-ha-794405
	I0729 17:56:52.816816  111109 host.go:66] Checking if "ha-794405-m04" exists ...
	I0729 17:56:52.817166  111109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:56:52.817205  111109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:56:52.831655  111109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45149
	I0729 17:56:52.832054  111109 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:56:52.832450  111109 main.go:141] libmachine: Using API Version  1
	I0729 17:56:52.832471  111109 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:56:52.832751  111109 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:56:52.832958  111109 main.go:141] libmachine: (ha-794405-m04) Calling .DriverName
	I0729 17:56:52.833143  111109 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:56:52.833164  111109 main.go:141] libmachine: (ha-794405-m04) Calling .GetSSHHostname
	I0729 17:56:52.835771  111109 main.go:141] libmachine: (ha-794405-m04) DBG | domain ha-794405-m04 has defined MAC address 52:54:00:9f:d3:c3 in network mk-ha-794405
	I0729 17:56:52.836193  111109 main.go:141] libmachine: (ha-794405-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:d3:c3", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:52:49 +0000 UTC Type:0 Mac:52:54:00:9f:d3:c3 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-794405-m04 Clientid:01:52:54:00:9f:d3:c3}
	I0729 17:56:52.836220  111109 main.go:141] libmachine: (ha-794405-m04) DBG | domain ha-794405-m04 has defined IP address 192.168.39.179 and MAC address 52:54:00:9f:d3:c3 in network mk-ha-794405
	I0729 17:56:52.836354  111109 main.go:141] libmachine: (ha-794405-m04) Calling .GetSSHPort
	I0729 17:56:52.836510  111109 main.go:141] libmachine: (ha-794405-m04) Calling .GetSSHKeyPath
	I0729 17:56:52.836684  111109 main.go:141] libmachine: (ha-794405-m04) Calling .GetSSHUsername
	I0729 17:56:52.836789  111109 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m04/id_rsa Username:docker}
	I0729 17:56:52.916399  111109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:56:52.930756  111109 status.go:257] ha-794405-m04 status: &{Name:ha-794405-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
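The apiserver health probe recorded above (api_server.go:253/279) is a plain HTTPS GET against the control-plane VIP; a 200 response whose body is "ok" is treated as a healthy apiserver, and it is the fallback when the freezer-cgroup check fails as shown at api_server.go:177. Below is a minimal Go sketch of that probe, assuming only the endpoint URL from the log and skipping TLS verification for brevity; minikube's real check uses the cluster's credentials, so this is an illustration, not the library code.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// healthzOK mirrors the probe shown in the log: GET <endpoint>/healthz and
// treat a 200 response with body "ok" as healthy. TLS verification is skipped
// here for brevity (assumption); the real client trusts the cluster CA.
func healthzOK(endpoint string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return false, err
	}
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	ok, err := healthzOK("https://192.168.39.254:8443")
	fmt.Println(ok, err)
}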
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-794405 status -v=7 --alsologtostderr: exit status 7 (602.880474ms)

                                                
                                                
-- stdout --
	ha-794405
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-794405-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-794405-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-794405-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 17:57:03.132003  111217 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:57:03.132102  111217 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:57:03.132109  111217 out.go:304] Setting ErrFile to fd 2...
	I0729 17:57:03.132114  111217 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:57:03.132276  111217 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19339-88081/.minikube/bin
	I0729 17:57:03.132426  111217 out.go:298] Setting JSON to false
	I0729 17:57:03.132450  111217 mustload.go:65] Loading cluster: ha-794405
	I0729 17:57:03.132549  111217 notify.go:220] Checking for updates...
	I0729 17:57:03.132786  111217 config.go:182] Loaded profile config "ha-794405": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:57:03.132802  111217 status.go:255] checking status of ha-794405 ...
	I0729 17:57:03.133187  111217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:57:03.133254  111217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:57:03.148056  111217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36827
	I0729 17:57:03.148438  111217 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:57:03.149023  111217 main.go:141] libmachine: Using API Version  1
	I0729 17:57:03.149048  111217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:57:03.149376  111217 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:57:03.149546  111217 main.go:141] libmachine: (ha-794405) Calling .GetState
	I0729 17:57:03.151203  111217 status.go:330] ha-794405 host status = "Running" (err=<nil>)
	I0729 17:57:03.151224  111217 host.go:66] Checking if "ha-794405" exists ...
	I0729 17:57:03.151612  111217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:57:03.151669  111217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:57:03.166917  111217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34035
	I0729 17:57:03.167368  111217 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:57:03.167837  111217 main.go:141] libmachine: Using API Version  1
	I0729 17:57:03.167860  111217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:57:03.168112  111217 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:57:03.168297  111217 main.go:141] libmachine: (ha-794405) Calling .GetIP
	I0729 17:57:03.171006  111217 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:57:03.171362  111217 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:57:03.171391  111217 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:57:03.171463  111217 host.go:66] Checking if "ha-794405" exists ...
	I0729 17:57:03.171759  111217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:57:03.171791  111217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:57:03.186060  111217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42745
	I0729 17:57:03.186442  111217 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:57:03.186868  111217 main.go:141] libmachine: Using API Version  1
	I0729 17:57:03.186892  111217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:57:03.187225  111217 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:57:03.187404  111217 main.go:141] libmachine: (ha-794405) Calling .DriverName
	I0729 17:57:03.187617  111217 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:57:03.187650  111217 main.go:141] libmachine: (ha-794405) Calling .GetSSHHostname
	I0729 17:57:03.190451  111217 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:57:03.190914  111217 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:57:03.190935  111217 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:57:03.191065  111217 main.go:141] libmachine: (ha-794405) Calling .GetSSHPort
	I0729 17:57:03.191230  111217 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:57:03.191413  111217 main.go:141] libmachine: (ha-794405) Calling .GetSSHUsername
	I0729 17:57:03.191555  111217 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405/id_rsa Username:docker}
	I0729 17:57:03.273225  111217 ssh_runner.go:195] Run: systemctl --version
	I0729 17:57:03.279836  111217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:57:03.297456  111217 kubeconfig.go:125] found "ha-794405" server: "https://192.168.39.254:8443"
	I0729 17:57:03.297495  111217 api_server.go:166] Checking apiserver status ...
	I0729 17:57:03.297533  111217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 17:57:03.311850  111217 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1217/cgroup
	W0729 17:57:03.322601  111217 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1217/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 17:57:03.322652  111217 ssh_runner.go:195] Run: ls
	I0729 17:57:03.327333  111217 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 17:57:03.331589  111217 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 17:57:03.331615  111217 status.go:422] ha-794405 apiserver status = Running (err=<nil>)
	I0729 17:57:03.331625  111217 status.go:257] ha-794405 status: &{Name:ha-794405 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 17:57:03.331640  111217 status.go:255] checking status of ha-794405-m02 ...
	I0729 17:57:03.332035  111217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:57:03.332105  111217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:57:03.346901  111217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34209
	I0729 17:57:03.347284  111217 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:57:03.347765  111217 main.go:141] libmachine: Using API Version  1
	I0729 17:57:03.347790  111217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:57:03.348138  111217 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:57:03.348361  111217 main.go:141] libmachine: (ha-794405-m02) Calling .GetState
	I0729 17:57:03.349959  111217 status.go:330] ha-794405-m02 host status = "Stopped" (err=<nil>)
	I0729 17:57:03.349972  111217 status.go:343] host is not running, skipping remaining checks
	I0729 17:57:03.349980  111217 status.go:257] ha-794405-m02 status: &{Name:ha-794405-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 17:57:03.350001  111217 status.go:255] checking status of ha-794405-m03 ...
	I0729 17:57:03.350347  111217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:57:03.350396  111217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:57:03.364490  111217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43633
	I0729 17:57:03.364848  111217 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:57:03.365301  111217 main.go:141] libmachine: Using API Version  1
	I0729 17:57:03.365322  111217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:57:03.365605  111217 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:57:03.365800  111217 main.go:141] libmachine: (ha-794405-m03) Calling .GetState
	I0729 17:57:03.367305  111217 status.go:330] ha-794405-m03 host status = "Running" (err=<nil>)
	I0729 17:57:03.367323  111217 host.go:66] Checking if "ha-794405-m03" exists ...
	I0729 17:57:03.367711  111217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:57:03.367758  111217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:57:03.382699  111217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42105
	I0729 17:57:03.383076  111217 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:57:03.383577  111217 main.go:141] libmachine: Using API Version  1
	I0729 17:57:03.383600  111217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:57:03.383919  111217 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:57:03.384113  111217 main.go:141] libmachine: (ha-794405-m03) Calling .GetIP
	I0729 17:57:03.386788  111217 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:57:03.387194  111217 main.go:141] libmachine: (ha-794405-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:a7:17", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:51:28 +0000 UTC Type:0 Mac:52:54:00:6d:a7:17 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-794405-m03 Clientid:01:52:54:00:6d:a7:17}
	I0729 17:57:03.387221  111217 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined IP address 192.168.39.185 and MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:57:03.387307  111217 host.go:66] Checking if "ha-794405-m03" exists ...
	I0729 17:57:03.387604  111217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:57:03.387640  111217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:57:03.402288  111217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45579
	I0729 17:57:03.402674  111217 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:57:03.403124  111217 main.go:141] libmachine: Using API Version  1
	I0729 17:57:03.403145  111217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:57:03.403535  111217 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:57:03.403694  111217 main.go:141] libmachine: (ha-794405-m03) Calling .DriverName
	I0729 17:57:03.403910  111217 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:57:03.403933  111217 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHHostname
	I0729 17:57:03.406681  111217 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:57:03.407096  111217 main.go:141] libmachine: (ha-794405-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:a7:17", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:51:28 +0000 UTC Type:0 Mac:52:54:00:6d:a7:17 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-794405-m03 Clientid:01:52:54:00:6d:a7:17}
	I0729 17:57:03.407130  111217 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined IP address 192.168.39.185 and MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:57:03.407263  111217 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHPort
	I0729 17:57:03.407454  111217 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHKeyPath
	I0729 17:57:03.407617  111217 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHUsername
	I0729 17:57:03.407738  111217 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m03/id_rsa Username:docker}
	I0729 17:57:03.488910  111217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:57:03.504951  111217 kubeconfig.go:125] found "ha-794405" server: "https://192.168.39.254:8443"
	I0729 17:57:03.504982  111217 api_server.go:166] Checking apiserver status ...
	I0729 17:57:03.505037  111217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 17:57:03.518366  111217 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1518/cgroup
	W0729 17:57:03.527942  111217 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1518/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 17:57:03.528002  111217 ssh_runner.go:195] Run: ls
	I0729 17:57:03.532151  111217 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 17:57:03.536359  111217 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 17:57:03.536385  111217 status.go:422] ha-794405-m03 apiserver status = Running (err=<nil>)
	I0729 17:57:03.536394  111217 status.go:257] ha-794405-m03 status: &{Name:ha-794405-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 17:57:03.536408  111217 status.go:255] checking status of ha-794405-m04 ...
	I0729 17:57:03.536699  111217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:57:03.536731  111217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:57:03.552536  111217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37907
	I0729 17:57:03.553020  111217 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:57:03.553471  111217 main.go:141] libmachine: Using API Version  1
	I0729 17:57:03.553490  111217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:57:03.553787  111217 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:57:03.553970  111217 main.go:141] libmachine: (ha-794405-m04) Calling .GetState
	I0729 17:57:03.555553  111217 status.go:330] ha-794405-m04 host status = "Running" (err=<nil>)
	I0729 17:57:03.555573  111217 host.go:66] Checking if "ha-794405-m04" exists ...
	I0729 17:57:03.555894  111217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:57:03.555941  111217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:57:03.570968  111217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44825
	I0729 17:57:03.571368  111217 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:57:03.571854  111217 main.go:141] libmachine: Using API Version  1
	I0729 17:57:03.571874  111217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:57:03.572142  111217 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:57:03.572295  111217 main.go:141] libmachine: (ha-794405-m04) Calling .GetIP
	I0729 17:57:03.575049  111217 main.go:141] libmachine: (ha-794405-m04) DBG | domain ha-794405-m04 has defined MAC address 52:54:00:9f:d3:c3 in network mk-ha-794405
	I0729 17:57:03.575435  111217 main.go:141] libmachine: (ha-794405-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:d3:c3", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:52:49 +0000 UTC Type:0 Mac:52:54:00:9f:d3:c3 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-794405-m04 Clientid:01:52:54:00:9f:d3:c3}
	I0729 17:57:03.575468  111217 main.go:141] libmachine: (ha-794405-m04) DBG | domain ha-794405-m04 has defined IP address 192.168.39.179 and MAC address 52:54:00:9f:d3:c3 in network mk-ha-794405
	I0729 17:57:03.575573  111217 host.go:66] Checking if "ha-794405-m04" exists ...
	I0729 17:57:03.575858  111217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:57:03.575892  111217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:57:03.590075  111217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32779
	I0729 17:57:03.590484  111217 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:57:03.590934  111217 main.go:141] libmachine: Using API Version  1
	I0729 17:57:03.590956  111217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:57:03.591245  111217 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:57:03.591417  111217 main.go:141] libmachine: (ha-794405-m04) Calling .DriverName
	I0729 17:57:03.591584  111217 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 17:57:03.591604  111217 main.go:141] libmachine: (ha-794405-m04) Calling .GetSSHHostname
	I0729 17:57:03.594168  111217 main.go:141] libmachine: (ha-794405-m04) DBG | domain ha-794405-m04 has defined MAC address 52:54:00:9f:d3:c3 in network mk-ha-794405
	I0729 17:57:03.594583  111217 main.go:141] libmachine: (ha-794405-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:d3:c3", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:52:49 +0000 UTC Type:0 Mac:52:54:00:9f:d3:c3 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-794405-m04 Clientid:01:52:54:00:9f:d3:c3}
	I0729 17:57:03.594604  111217 main.go:141] libmachine: (ha-794405-m04) DBG | domain ha-794405-m04 has defined IP address 192.168.39.179 and MAC address 52:54:00:9f:d3:c3 in network mk-ha-794405
	I0729 17:57:03.594762  111217 main.go:141] libmachine: (ha-794405-m04) Calling .GetSSHPort
	I0729 17:57:03.594918  111217 main.go:141] libmachine: (ha-794405-m04) Calling .GetSSHKeyPath
	I0729 17:57:03.595045  111217 main.go:141] libmachine: (ha-794405-m04) Calling .GetSSHUsername
	I0729 17:57:03.595185  111217 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m04/id_rsa Username:docker}
	I0729 17:57:03.676225  111217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:57:03.690893  111217 status.go:257] ha-794405-m04 status: &{Name:ha-794405-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-794405 status -v=7 --alsologtostderr" : exit status 7
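The non-zero exit matches the stdout above: ha-794405-m02 still reports Stopped after `node start m02`, so the status command exits with status 7 and ha_test.go:432 fails. For local reproduction, here is a minimal Go sketch that re-runs the same status command and surfaces its exit code; the binary path and profile name are taken from the log and would need adjusting outside this CI workspace.

package main

import (
	"fmt"
	"os/exec"
)

// Re-runs the status command the test invoked and prints its exit code.
// In the failure above this command exited 7 while ha-794405-m02 showed
// "Stopped"; a fully running HA cluster would exit 0.
func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-794405",
		"status", "-v=7", "--alsologtostderr")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s\n", out)
	if exitErr, ok := err.(*exec.ExitError); ok {
		fmt.Println("exit code:", exitErr.ExitCode())
	} else if err != nil {
		fmt.Println("failed to run:", err)
	} else {
		fmt.Println("exit code: 0")
	}
}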
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-794405 -n ha-794405
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-794405 logs -n 25: (1.381665521s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-794405 ssh -n                                                                 | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | ha-794405-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-794405 cp ha-794405-m03:/home/docker/cp-test.txt                              | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | ha-794405:/home/docker/cp-test_ha-794405-m03_ha-794405.txt                       |           |         |         |                     |                     |
	| ssh     | ha-794405 ssh -n                                                                 | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | ha-794405-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-794405 ssh -n ha-794405 sudo cat                                              | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | /home/docker/cp-test_ha-794405-m03_ha-794405.txt                                 |           |         |         |                     |                     |
	| cp      | ha-794405 cp ha-794405-m03:/home/docker/cp-test.txt                              | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | ha-794405-m02:/home/docker/cp-test_ha-794405-m03_ha-794405-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-794405 ssh -n                                                                 | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | ha-794405-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-794405 ssh -n ha-794405-m02 sudo cat                                          | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | /home/docker/cp-test_ha-794405-m03_ha-794405-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-794405 cp ha-794405-m03:/home/docker/cp-test.txt                              | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | ha-794405-m04:/home/docker/cp-test_ha-794405-m03_ha-794405-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-794405 ssh -n                                                                 | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | ha-794405-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-794405 ssh -n ha-794405-m04 sudo cat                                          | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | /home/docker/cp-test_ha-794405-m03_ha-794405-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-794405 cp testdata/cp-test.txt                                                | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | ha-794405-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-794405 ssh -n                                                                 | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | ha-794405-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-794405 cp ha-794405-m04:/home/docker/cp-test.txt                              | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1800704997/001/cp-test_ha-794405-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-794405 ssh -n                                                                 | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | ha-794405-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-794405 cp ha-794405-m04:/home/docker/cp-test.txt                              | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | ha-794405:/home/docker/cp-test_ha-794405-m04_ha-794405.txt                       |           |         |         |                     |                     |
	| ssh     | ha-794405 ssh -n                                                                 | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | ha-794405-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-794405 ssh -n ha-794405 sudo cat                                              | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | /home/docker/cp-test_ha-794405-m04_ha-794405.txt                                 |           |         |         |                     |                     |
	| cp      | ha-794405 cp ha-794405-m04:/home/docker/cp-test.txt                              | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | ha-794405-m02:/home/docker/cp-test_ha-794405-m04_ha-794405-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-794405 ssh -n                                                                 | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | ha-794405-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-794405 ssh -n ha-794405-m02 sudo cat                                          | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | /home/docker/cp-test_ha-794405-m04_ha-794405-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-794405 cp ha-794405-m04:/home/docker/cp-test.txt                              | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | ha-794405-m03:/home/docker/cp-test_ha-794405-m04_ha-794405-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-794405 ssh -n                                                                 | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | ha-794405-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-794405 ssh -n ha-794405-m03 sudo cat                                          | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | /home/docker/cp-test_ha-794405-m04_ha-794405-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-794405 node stop m02 -v=7                                                     | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-794405 node start m02 -v=7                                                    | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:56 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 17:49:02
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 17:49:02.826095  105708 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:49:02.826385  105708 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:49:02.826396  105708 out.go:304] Setting ErrFile to fd 2...
	I0729 17:49:02.826400  105708 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:49:02.826591  105708 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19339-88081/.minikube/bin
	I0729 17:49:02.827147  105708 out.go:298] Setting JSON to false
	I0729 17:49:02.828119  105708 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":9063,"bootTime":1722266280,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 17:49:02.828172  105708 start.go:139] virtualization: kvm guest
	I0729 17:49:02.830990  105708 out.go:177] * [ha-794405] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 17:49:02.832383  105708 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 17:49:02.832406  105708 notify.go:220] Checking for updates...
	I0729 17:49:02.834889  105708 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 17:49:02.836265  105708 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19339-88081/kubeconfig
	I0729 17:49:02.837498  105708 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19339-88081/.minikube
	I0729 17:49:02.838698  105708 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 17:49:02.839838  105708 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 17:49:02.841175  105708 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 17:49:02.876993  105708 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 17:49:02.878394  105708 start.go:297] selected driver: kvm2
	I0729 17:49:02.878409  105708 start.go:901] validating driver "kvm2" against <nil>
	I0729 17:49:02.878421  105708 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 17:49:02.879446  105708 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:49:02.879522  105708 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19339-88081/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 17:49:02.895099  105708 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 17:49:02.895149  105708 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 17:49:02.895354  105708 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 17:49:02.895408  105708 cni.go:84] Creating CNI manager for ""
	I0729 17:49:02.895419  105708 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0729 17:49:02.895426  105708 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0729 17:49:02.895481  105708 start.go:340] cluster config:
	{Name:ha-794405 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-794405 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:49:02.895575  105708 iso.go:125] acquiring lock: {Name:mkff602c753bfa3e0d79d57ee8dc490f2e8d0298 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:49:02.897380  105708 out.go:177] * Starting "ha-794405" primary control-plane node in "ha-794405" cluster
	I0729 17:49:02.898661  105708 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 17:49:02.898696  105708 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 17:49:02.898706  105708 cache.go:56] Caching tarball of preloaded images
	I0729 17:49:02.898779  105708 preload.go:172] Found /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 17:49:02.898788  105708 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 17:49:02.899135  105708 profile.go:143] Saving config to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/config.json ...
	I0729 17:49:02.899157  105708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/config.json: {Name:mk30de7d0c2625e6321a17969a3dfd0d2dbdef3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:49:02.899281  105708 start.go:360] acquireMachinesLock for ha-794405: {Name:mkb4fc91615189a18c2505c715f6575ee0da2912 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:49:02.899307  105708 start.go:364] duration metric: took 14.682µs to acquireMachinesLock for "ha-794405"
	I0729 17:49:02.899323  105708 start.go:93] Provisioning new machine with config: &{Name:ha-794405 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-794405 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 17:49:02.899386  105708 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 17:49:02.901032  105708 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 17:49:02.901232  105708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:49:02.901277  105708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:49:02.915591  105708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46039
	I0729 17:49:02.916063  105708 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:49:02.916573  105708 main.go:141] libmachine: Using API Version  1
	I0729 17:49:02.916595  105708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:49:02.916895  105708 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:49:02.917094  105708 main.go:141] libmachine: (ha-794405) Calling .GetMachineName
	I0729 17:49:02.917236  105708 main.go:141] libmachine: (ha-794405) Calling .DriverName
	I0729 17:49:02.917370  105708 start.go:159] libmachine.API.Create for "ha-794405" (driver="kvm2")
	I0729 17:49:02.917400  105708 client.go:168] LocalClient.Create starting
	I0729 17:49:02.917445  105708 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem
	I0729 17:49:02.917484  105708 main.go:141] libmachine: Decoding PEM data...
	I0729 17:49:02.917508  105708 main.go:141] libmachine: Parsing certificate...
	I0729 17:49:02.917589  105708 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem
	I0729 17:49:02.917627  105708 main.go:141] libmachine: Decoding PEM data...
	I0729 17:49:02.917646  105708 main.go:141] libmachine: Parsing certificate...
	I0729 17:49:02.917668  105708 main.go:141] libmachine: Running pre-create checks...
	I0729 17:49:02.917687  105708 main.go:141] libmachine: (ha-794405) Calling .PreCreateCheck
	I0729 17:49:02.918008  105708 main.go:141] libmachine: (ha-794405) Calling .GetConfigRaw
	I0729 17:49:02.918405  105708 main.go:141] libmachine: Creating machine...
	I0729 17:49:02.918420  105708 main.go:141] libmachine: (ha-794405) Calling .Create
	I0729 17:49:02.918535  105708 main.go:141] libmachine: (ha-794405) Creating KVM machine...
	I0729 17:49:02.919868  105708 main.go:141] libmachine: (ha-794405) DBG | found existing default KVM network
	I0729 17:49:02.920566  105708 main.go:141] libmachine: (ha-794405) DBG | I0729 17:49:02.920405  105731 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1e0}
	I0729 17:49:02.920588  105708 main.go:141] libmachine: (ha-794405) DBG | created network xml: 
	I0729 17:49:02.920598  105708 main.go:141] libmachine: (ha-794405) DBG | <network>
	I0729 17:49:02.920608  105708 main.go:141] libmachine: (ha-794405) DBG |   <name>mk-ha-794405</name>
	I0729 17:49:02.920617  105708 main.go:141] libmachine: (ha-794405) DBG |   <dns enable='no'/>
	I0729 17:49:02.920627  105708 main.go:141] libmachine: (ha-794405) DBG |   
	I0729 17:49:02.920646  105708 main.go:141] libmachine: (ha-794405) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0729 17:49:02.920658  105708 main.go:141] libmachine: (ha-794405) DBG |     <dhcp>
	I0729 17:49:02.920668  105708 main.go:141] libmachine: (ha-794405) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0729 17:49:02.920680  105708 main.go:141] libmachine: (ha-794405) DBG |     </dhcp>
	I0729 17:49:02.920690  105708 main.go:141] libmachine: (ha-794405) DBG |   </ip>
	I0729 17:49:02.920701  105708 main.go:141] libmachine: (ha-794405) DBG |   
	I0729 17:49:02.920723  105708 main.go:141] libmachine: (ha-794405) DBG | </network>
	I0729 17:49:02.920736  105708 main.go:141] libmachine: (ha-794405) DBG | 
	I0729 17:49:02.925707  105708 main.go:141] libmachine: (ha-794405) DBG | trying to create private KVM network mk-ha-794405 192.168.39.0/24...
	I0729 17:49:02.992063  105708 main.go:141] libmachine: (ha-794405) DBG | private KVM network mk-ha-794405 192.168.39.0/24 created
	I0729 17:49:02.992097  105708 main.go:141] libmachine: (ha-794405) DBG | I0729 17:49:02.992044  105731 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19339-88081/.minikube
	I0729 17:49:02.992105  105708 main.go:141] libmachine: (ha-794405) Setting up store path in /home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405 ...
	I0729 17:49:02.992115  105708 main.go:141] libmachine: (ha-794405) Building disk image from file:///home/jenkins/minikube-integration/19339-88081/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0729 17:49:02.992160  105708 main.go:141] libmachine: (ha-794405) Downloading /home/jenkins/minikube-integration/19339-88081/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19339-88081/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0729 17:49:03.246791  105708 main.go:141] libmachine: (ha-794405) DBG | I0729 17:49:03.246674  105731 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405/id_rsa...
	I0729 17:49:03.734433  105708 main.go:141] libmachine: (ha-794405) DBG | I0729 17:49:03.734328  105731 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405/ha-794405.rawdisk...
	I0729 17:49:03.734464  105708 main.go:141] libmachine: (ha-794405) DBG | Writing magic tar header
	I0729 17:49:03.734492  105708 main.go:141] libmachine: (ha-794405) DBG | Writing SSH key tar header
	I0729 17:49:03.734510  105708 main.go:141] libmachine: (ha-794405) DBG | I0729 17:49:03.734433  105731 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405 ...
	I0729 17:49:03.734542  105708 main.go:141] libmachine: (ha-794405) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405
	I0729 17:49:03.734564  105708 main.go:141] libmachine: (ha-794405) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19339-88081/.minikube/machines
	I0729 17:49:03.734577  105708 main.go:141] libmachine: (ha-794405) Setting executable bit set on /home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405 (perms=drwx------)
	I0729 17:49:03.734588  105708 main.go:141] libmachine: (ha-794405) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19339-88081/.minikube
	I0729 17:49:03.734599  105708 main.go:141] libmachine: (ha-794405) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19339-88081
	I0729 17:49:03.734605  105708 main.go:141] libmachine: (ha-794405) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 17:49:03.734616  105708 main.go:141] libmachine: (ha-794405) DBG | Checking permissions on dir: /home/jenkins
	I0729 17:49:03.734621  105708 main.go:141] libmachine: (ha-794405) DBG | Checking permissions on dir: /home
	I0729 17:49:03.734630  105708 main.go:141] libmachine: (ha-794405) DBG | Skipping /home - not owner
	I0729 17:49:03.734653  105708 main.go:141] libmachine: (ha-794405) Setting executable bit set on /home/jenkins/minikube-integration/19339-88081/.minikube/machines (perms=drwxr-xr-x)
	I0729 17:49:03.734689  105708 main.go:141] libmachine: (ha-794405) Setting executable bit set on /home/jenkins/minikube-integration/19339-88081/.minikube (perms=drwxr-xr-x)
	I0729 17:49:03.734702  105708 main.go:141] libmachine: (ha-794405) Setting executable bit set on /home/jenkins/minikube-integration/19339-88081 (perms=drwxrwxr-x)
	I0729 17:49:03.734708  105708 main.go:141] libmachine: (ha-794405) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 17:49:03.734716  105708 main.go:141] libmachine: (ha-794405) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 17:49:03.734724  105708 main.go:141] libmachine: (ha-794405) Creating domain...
	I0729 17:49:03.735727  105708 main.go:141] libmachine: (ha-794405) define libvirt domain using xml: 
	I0729 17:49:03.735748  105708 main.go:141] libmachine: (ha-794405) <domain type='kvm'>
	I0729 17:49:03.735756  105708 main.go:141] libmachine: (ha-794405)   <name>ha-794405</name>
	I0729 17:49:03.735767  105708 main.go:141] libmachine: (ha-794405)   <memory unit='MiB'>2200</memory>
	I0729 17:49:03.735775  105708 main.go:141] libmachine: (ha-794405)   <vcpu>2</vcpu>
	I0729 17:49:03.735786  105708 main.go:141] libmachine: (ha-794405)   <features>
	I0729 17:49:03.735794  105708 main.go:141] libmachine: (ha-794405)     <acpi/>
	I0729 17:49:03.735804  105708 main.go:141] libmachine: (ha-794405)     <apic/>
	I0729 17:49:03.735810  105708 main.go:141] libmachine: (ha-794405)     <pae/>
	I0729 17:49:03.735824  105708 main.go:141] libmachine: (ha-794405)     
	I0729 17:49:03.735831  105708 main.go:141] libmachine: (ha-794405)   </features>
	I0729 17:49:03.735836  105708 main.go:141] libmachine: (ha-794405)   <cpu mode='host-passthrough'>
	I0729 17:49:03.735871  105708 main.go:141] libmachine: (ha-794405)   
	I0729 17:49:03.735896  105708 main.go:141] libmachine: (ha-794405)   </cpu>
	I0729 17:49:03.735912  105708 main.go:141] libmachine: (ha-794405)   <os>
	I0729 17:49:03.735923  105708 main.go:141] libmachine: (ha-794405)     <type>hvm</type>
	I0729 17:49:03.735936  105708 main.go:141] libmachine: (ha-794405)     <boot dev='cdrom'/>
	I0729 17:49:03.735946  105708 main.go:141] libmachine: (ha-794405)     <boot dev='hd'/>
	I0729 17:49:03.735958  105708 main.go:141] libmachine: (ha-794405)     <bootmenu enable='no'/>
	I0729 17:49:03.735967  105708 main.go:141] libmachine: (ha-794405)   </os>
	I0729 17:49:03.735985  105708 main.go:141] libmachine: (ha-794405)   <devices>
	I0729 17:49:03.736005  105708 main.go:141] libmachine: (ha-794405)     <disk type='file' device='cdrom'>
	I0729 17:49:03.736030  105708 main.go:141] libmachine: (ha-794405)       <source file='/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405/boot2docker.iso'/>
	I0729 17:49:03.736050  105708 main.go:141] libmachine: (ha-794405)       <target dev='hdc' bus='scsi'/>
	I0729 17:49:03.736063  105708 main.go:141] libmachine: (ha-794405)       <readonly/>
	I0729 17:49:03.736073  105708 main.go:141] libmachine: (ha-794405)     </disk>
	I0729 17:49:03.736085  105708 main.go:141] libmachine: (ha-794405)     <disk type='file' device='disk'>
	I0729 17:49:03.736097  105708 main.go:141] libmachine: (ha-794405)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 17:49:03.736108  105708 main.go:141] libmachine: (ha-794405)       <source file='/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405/ha-794405.rawdisk'/>
	I0729 17:49:03.736117  105708 main.go:141] libmachine: (ha-794405)       <target dev='hda' bus='virtio'/>
	I0729 17:49:03.736133  105708 main.go:141] libmachine: (ha-794405)     </disk>
	I0729 17:49:03.736151  105708 main.go:141] libmachine: (ha-794405)     <interface type='network'>
	I0729 17:49:03.736164  105708 main.go:141] libmachine: (ha-794405)       <source network='mk-ha-794405'/>
	I0729 17:49:03.736175  105708 main.go:141] libmachine: (ha-794405)       <model type='virtio'/>
	I0729 17:49:03.736186  105708 main.go:141] libmachine: (ha-794405)     </interface>
	I0729 17:49:03.736196  105708 main.go:141] libmachine: (ha-794405)     <interface type='network'>
	I0729 17:49:03.736207  105708 main.go:141] libmachine: (ha-794405)       <source network='default'/>
	I0729 17:49:03.736221  105708 main.go:141] libmachine: (ha-794405)       <model type='virtio'/>
	I0729 17:49:03.736231  105708 main.go:141] libmachine: (ha-794405)     </interface>
	I0729 17:49:03.736241  105708 main.go:141] libmachine: (ha-794405)     <serial type='pty'>
	I0729 17:49:03.736252  105708 main.go:141] libmachine: (ha-794405)       <target port='0'/>
	I0729 17:49:03.736271  105708 main.go:141] libmachine: (ha-794405)     </serial>
	I0729 17:49:03.736284  105708 main.go:141] libmachine: (ha-794405)     <console type='pty'>
	I0729 17:49:03.736298  105708 main.go:141] libmachine: (ha-794405)       <target type='serial' port='0'/>
	I0729 17:49:03.736310  105708 main.go:141] libmachine: (ha-794405)     </console>
	I0729 17:49:03.736321  105708 main.go:141] libmachine: (ha-794405)     <rng model='virtio'>
	I0729 17:49:03.736334  105708 main.go:141] libmachine: (ha-794405)       <backend model='random'>/dev/random</backend>
	I0729 17:49:03.736343  105708 main.go:141] libmachine: (ha-794405)     </rng>
	I0729 17:49:03.736352  105708 main.go:141] libmachine: (ha-794405)     
	I0729 17:49:03.736358  105708 main.go:141] libmachine: (ha-794405)     
	I0729 17:49:03.736373  105708 main.go:141] libmachine: (ha-794405)   </devices>
	I0729 17:49:03.736389  105708 main.go:141] libmachine: (ha-794405) </domain>
	I0729 17:49:03.736406  105708 main.go:141] libmachine: (ha-794405) 
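The XML above defines the ha-794405 guest (CD-ROM boot ISO, raw virtio disk, one NIC on the private mk-ha-794405 network and one on libvirt's default network). A minimal sketch, not minikube's own code, of confirming both objects with the stock virsh CLI; only the domain and network names are taken from the log, the rest is illustrative:

    // verify_kvm.go: illustrative only. Shells out to virsh to confirm that the
    // private network and the domain defined above exist and report their state.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func virsh(args ...string) string {
        out, _ := exec.Command("virsh", args...).CombinedOutput()
        return strings.TrimSpace(string(out))
    }

    func main() {
        fmt.Println(virsh("net-info", "mk-ha-794405")) // private KVM network created above
        fmt.Println(virsh("dominfo", "ha-794405"))     // domain defined from the XML above
    }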
	I0729 17:49:03.740482  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:99:46:a4 in network default
	I0729 17:49:03.741062  105708 main.go:141] libmachine: (ha-794405) Ensuring networks are active...
	I0729 17:49:03.741080  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:03.741701  105708 main.go:141] libmachine: (ha-794405) Ensuring network default is active
	I0729 17:49:03.741942  105708 main.go:141] libmachine: (ha-794405) Ensuring network mk-ha-794405 is active
	I0729 17:49:03.742356  105708 main.go:141] libmachine: (ha-794405) Getting domain xml...
	I0729 17:49:03.743032  105708 main.go:141] libmachine: (ha-794405) Creating domain...
	I0729 17:49:04.055804  105708 main.go:141] libmachine: (ha-794405) Waiting to get IP...
	I0729 17:49:04.056778  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:04.057192  105708 main.go:141] libmachine: (ha-794405) DBG | unable to find current IP address of domain ha-794405 in network mk-ha-794405
	I0729 17:49:04.057231  105708 main.go:141] libmachine: (ha-794405) DBG | I0729 17:49:04.057178  105731 retry.go:31] will retry after 205.96088ms: waiting for machine to come up
	I0729 17:49:04.264556  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:04.264963  105708 main.go:141] libmachine: (ha-794405) DBG | unable to find current IP address of domain ha-794405 in network mk-ha-794405
	I0729 17:49:04.264989  105708 main.go:141] libmachine: (ha-794405) DBG | I0729 17:49:04.264940  105731 retry.go:31] will retry after 324.704809ms: waiting for machine to come up
	I0729 17:49:04.591370  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:04.591845  105708 main.go:141] libmachine: (ha-794405) DBG | unable to find current IP address of domain ha-794405 in network mk-ha-794405
	I0729 17:49:04.591872  105708 main.go:141] libmachine: (ha-794405) DBG | I0729 17:49:04.591792  105731 retry.go:31] will retry after 405.573536ms: waiting for machine to come up
	I0729 17:49:04.999287  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:04.999748  105708 main.go:141] libmachine: (ha-794405) DBG | unable to find current IP address of domain ha-794405 in network mk-ha-794405
	I0729 17:49:04.999774  105708 main.go:141] libmachine: (ha-794405) DBG | I0729 17:49:04.999721  105731 retry.go:31] will retry after 496.871109ms: waiting for machine to come up
	I0729 17:49:05.498405  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:05.498773  105708 main.go:141] libmachine: (ha-794405) DBG | unable to find current IP address of domain ha-794405 in network mk-ha-794405
	I0729 17:49:05.498810  105708 main.go:141] libmachine: (ha-794405) DBG | I0729 17:49:05.498722  105731 retry.go:31] will retry after 510.903666ms: waiting for machine to come up
	I0729 17:49:06.011952  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:06.012359  105708 main.go:141] libmachine: (ha-794405) DBG | unable to find current IP address of domain ha-794405 in network mk-ha-794405
	I0729 17:49:06.012382  105708 main.go:141] libmachine: (ha-794405) DBG | I0729 17:49:06.012319  105731 retry.go:31] will retry after 664.645855ms: waiting for machine to come up
	I0729 17:49:06.678052  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:06.678400  105708 main.go:141] libmachine: (ha-794405) DBG | unable to find current IP address of domain ha-794405 in network mk-ha-794405
	I0729 17:49:06.678431  105708 main.go:141] libmachine: (ha-794405) DBG | I0729 17:49:06.678381  105731 retry.go:31] will retry after 1.124585448s: waiting for machine to come up
	I0729 17:49:07.804662  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:07.805145  105708 main.go:141] libmachine: (ha-794405) DBG | unable to find current IP address of domain ha-794405 in network mk-ha-794405
	I0729 17:49:07.805191  105708 main.go:141] libmachine: (ha-794405) DBG | I0729 17:49:07.805120  105731 retry.go:31] will retry after 1.146972901s: waiting for machine to come up
	I0729 17:49:08.953966  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:08.954310  105708 main.go:141] libmachine: (ha-794405) DBG | unable to find current IP address of domain ha-794405 in network mk-ha-794405
	I0729 17:49:08.954343  105708 main.go:141] libmachine: (ha-794405) DBG | I0729 17:49:08.954253  105731 retry.go:31] will retry after 1.280729444s: waiting for machine to come up
	I0729 17:49:10.236121  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:10.236479  105708 main.go:141] libmachine: (ha-794405) DBG | unable to find current IP address of domain ha-794405 in network mk-ha-794405
	I0729 17:49:10.236519  105708 main.go:141] libmachine: (ha-794405) DBG | I0729 17:49:10.236446  105731 retry.go:31] will retry after 1.647758504s: waiting for machine to come up
	I0729 17:49:11.886214  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:11.886687  105708 main.go:141] libmachine: (ha-794405) DBG | unable to find current IP address of domain ha-794405 in network mk-ha-794405
	I0729 17:49:11.886718  105708 main.go:141] libmachine: (ha-794405) DBG | I0729 17:49:11.886628  105731 retry.go:31] will retry after 2.347847077s: waiting for machine to come up
	I0729 17:49:14.235798  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:14.236227  105708 main.go:141] libmachine: (ha-794405) DBG | unable to find current IP address of domain ha-794405 in network mk-ha-794405
	I0729 17:49:14.236269  105708 main.go:141] libmachine: (ha-794405) DBG | I0729 17:49:14.236189  105731 retry.go:31] will retry after 2.690373484s: waiting for machine to come up
	I0729 17:49:16.929828  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:16.930286  105708 main.go:141] libmachine: (ha-794405) DBG | unable to find current IP address of domain ha-794405 in network mk-ha-794405
	I0729 17:49:16.930313  105708 main.go:141] libmachine: (ha-794405) DBG | I0729 17:49:16.930236  105731 retry.go:31] will retry after 3.511637453s: waiting for machine to come up
	I0729 17:49:20.445378  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:20.445822  105708 main.go:141] libmachine: (ha-794405) DBG | unable to find current IP address of domain ha-794405 in network mk-ha-794405
	I0729 17:49:20.445846  105708 main.go:141] libmachine: (ha-794405) DBG | I0729 17:49:20.445780  105731 retry.go:31] will retry after 5.302806771s: waiting for machine to come up
	I0729 17:49:25.751979  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:25.752379  105708 main.go:141] libmachine: (ha-794405) Found IP for machine: 192.168.39.102
	I0729 17:49:25.752399  105708 main.go:141] libmachine: (ha-794405) Reserving static IP address...
	I0729 17:49:25.752413  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has current primary IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:25.752726  105708 main.go:141] libmachine: (ha-794405) DBG | unable to find host DHCP lease matching {name: "ha-794405", mac: "52:54:00:a5:77:cc", ip: "192.168.39.102"} in network mk-ha-794405
	I0729 17:49:25.824491  105708 main.go:141] libmachine: (ha-794405) Reserved static IP address: 192.168.39.102
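The retries above ("will retry after 205.96088ms" through "5.302806771s") poll for a DHCP lease matching the domain's MAC until libvirt hands out an address. A rough stand-alone sketch of that wait loop; it shells out to virsh net-dhcp-leases rather than using the libvirt API, and only the network name and MAC are taken from this run:

    // wait_for_ip.go: illustrative wait-for-machine loop with growing delays,
    // polling the network's DHCP leases for the MAC 52:54:00:a5:77:cc.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // leaseIP returns the address assigned to mac on the given libvirt network, if any.
    func leaseIP(network, mac string) (string, bool) {
        out, err := exec.Command("virsh", "net-dhcp-leases", network).Output()
        if err != nil {
            return "", false
        }
        for _, line := range strings.Split(string(out), "\n") {
            if !strings.Contains(line, mac) {
                continue
            }
            for _, field := range strings.Fields(line) {
                if strings.Contains(field, "/") { // IP column looks like 192.168.39.102/24
                    return strings.SplitN(field, "/", 2)[0], true
                }
            }
        }
        return "", false
    }

    func main() {
        delay := 200 * time.Millisecond
        for attempt := 0; attempt < 15; attempt++ {
            if ip, ok := leaseIP("mk-ha-794405", "52:54:00:a5:77:cc"); ok {
                fmt.Println("found IP:", ip)
                return
            }
            fmt.Printf("no lease yet, retrying after %v\n", delay)
            time.Sleep(delay)
            delay = delay * 3 / 2 // roughly the increasing backoff seen in the log
        }
        fmt.Println("gave up waiting for an IP")
    }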
	I0729 17:49:25.824523  105708 main.go:141] libmachine: (ha-794405) Waiting for SSH to be available...
	I0729 17:49:25.824534  105708 main.go:141] libmachine: (ha-794405) DBG | Getting to WaitForSSH function...
	I0729 17:49:25.827117  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:25.827416  105708 main.go:141] libmachine: (ha-794405) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405
	I0729 17:49:25.827444  105708 main.go:141] libmachine: (ha-794405) DBG | unable to find defined IP address of network mk-ha-794405 interface with MAC address 52:54:00:a5:77:cc
	I0729 17:49:25.827525  105708 main.go:141] libmachine: (ha-794405) DBG | Using SSH client type: external
	I0729 17:49:25.827549  105708 main.go:141] libmachine: (ha-794405) DBG | Using SSH private key: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405/id_rsa (-rw-------)
	I0729 17:49:25.827610  105708 main.go:141] libmachine: (ha-794405) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 17:49:25.827638  105708 main.go:141] libmachine: (ha-794405) DBG | About to run SSH command:
	I0729 17:49:25.827655  105708 main.go:141] libmachine: (ha-794405) DBG | exit 0
	I0729 17:49:25.831028  105708 main.go:141] libmachine: (ha-794405) DBG | SSH cmd err, output: exit status 255: 
	I0729 17:49:25.831053  105708 main.go:141] libmachine: (ha-794405) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0729 17:49:25.831064  105708 main.go:141] libmachine: (ha-794405) DBG | command : exit 0
	I0729 17:49:25.831075  105708 main.go:141] libmachine: (ha-794405) DBG | err     : exit status 255
	I0729 17:49:25.831089  105708 main.go:141] libmachine: (ha-794405) DBG | output  : 
	I0729 17:49:28.831960  105708 main.go:141] libmachine: (ha-794405) DBG | Getting to WaitForSSH function...
	I0729 17:49:28.834145  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:28.834488  105708 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:49:28.834515  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:28.834643  105708 main.go:141] libmachine: (ha-794405) DBG | Using SSH client type: external
	I0729 17:49:28.834681  105708 main.go:141] libmachine: (ha-794405) DBG | Using SSH private key: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405/id_rsa (-rw-------)
	I0729 17:49:28.834717  105708 main.go:141] libmachine: (ha-794405) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.102 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 17:49:28.834733  105708 main.go:141] libmachine: (ha-794405) DBG | About to run SSH command:
	I0729 17:49:28.834747  105708 main.go:141] libmachine: (ha-794405) DBG | exit 0
	I0729 17:49:28.956655  105708 main.go:141] libmachine: (ha-794405) DBG | SSH cmd err, output: <nil>: 
	I0729 17:49:28.956903  105708 main.go:141] libmachine: (ha-794405) KVM machine creation complete!
	I0729 17:49:28.957190  105708 main.go:141] libmachine: (ha-794405) Calling .GetConfigRaw
	I0729 17:49:28.957795  105708 main.go:141] libmachine: (ha-794405) Calling .DriverName
	I0729 17:49:28.957991  105708 main.go:141] libmachine: (ha-794405) Calling .DriverName
	I0729 17:49:28.958136  105708 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 17:49:28.958148  105708 main.go:141] libmachine: (ha-794405) Calling .GetState
	I0729 17:49:28.959385  105708 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 17:49:28.959398  105708 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 17:49:28.959405  105708 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 17:49:28.959410  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHHostname
	I0729 17:49:28.961561  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:28.961911  105708 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:49:28.961932  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:28.962100  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHPort
	I0729 17:49:28.962260  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:49:28.962441  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:49:28.962594  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHUsername
	I0729 17:49:28.962762  105708 main.go:141] libmachine: Using SSH client type: native
	I0729 17:49:28.962948  105708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0729 17:49:28.962957  105708 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 17:49:29.063890  105708 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 17:49:29.063917  105708 main.go:141] libmachine: Detecting the provisioner...
	I0729 17:49:29.063927  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHHostname
	I0729 17:49:29.066506  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:29.066824  105708 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:49:29.066855  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:29.066976  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHPort
	I0729 17:49:29.067164  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:49:29.067335  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:49:29.067471  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHUsername
	I0729 17:49:29.067652  105708 main.go:141] libmachine: Using SSH client type: native
	I0729 17:49:29.067852  105708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0729 17:49:29.067867  105708 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 17:49:29.169299  105708 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 17:49:29.169418  105708 main.go:141] libmachine: found compatible host: buildroot
	I0729 17:49:29.169442  105708 main.go:141] libmachine: Provisioning with buildroot...
	I0729 17:49:29.169472  105708 main.go:141] libmachine: (ha-794405) Calling .GetMachineName
	I0729 17:49:29.169723  105708 buildroot.go:166] provisioning hostname "ha-794405"
	I0729 17:49:29.169753  105708 main.go:141] libmachine: (ha-794405) Calling .GetMachineName
	I0729 17:49:29.169967  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHHostname
	I0729 17:49:29.172330  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:29.172670  105708 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:49:29.172692  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:29.172838  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHPort
	I0729 17:49:29.173021  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:49:29.173179  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:49:29.173313  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHUsername
	I0729 17:49:29.173456  105708 main.go:141] libmachine: Using SSH client type: native
	I0729 17:49:29.173621  105708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0729 17:49:29.173634  105708 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-794405 && echo "ha-794405" | sudo tee /etc/hostname
	I0729 17:49:29.287535  105708 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-794405
	
	I0729 17:49:29.287562  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHHostname
	I0729 17:49:29.290362  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:29.290718  105708 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:49:29.290748  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:29.290888  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHPort
	I0729 17:49:29.291060  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:49:29.291260  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:49:29.291385  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHUsername
	I0729 17:49:29.291529  105708 main.go:141] libmachine: Using SSH client type: native
	I0729 17:49:29.291732  105708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0729 17:49:29.291756  105708 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-794405' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-794405/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-794405' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 17:49:29.401468  105708 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 17:49:29.401496  105708 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19339-88081/.minikube CaCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19339-88081/.minikube}
	I0729 17:49:29.401551  105708 buildroot.go:174] setting up certificates
	I0729 17:49:29.401563  105708 provision.go:84] configureAuth start
	I0729 17:49:29.401574  105708 main.go:141] libmachine: (ha-794405) Calling .GetMachineName
	I0729 17:49:29.401886  105708 main.go:141] libmachine: (ha-794405) Calling .GetIP
	I0729 17:49:29.404405  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:29.404737  105708 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:49:29.404759  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:29.404925  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHHostname
	I0729 17:49:29.407032  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:29.407332  105708 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:49:29.407354  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:29.407459  105708 provision.go:143] copyHostCerts
	I0729 17:49:29.407490  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem
	I0729 17:49:29.407538  105708 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem, removing ...
	I0729 17:49:29.407547  105708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem
	I0729 17:49:29.407623  105708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem (1078 bytes)
	I0729 17:49:29.407745  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem
	I0729 17:49:29.407776  105708 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem, removing ...
	I0729 17:49:29.407785  105708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem
	I0729 17:49:29.407821  105708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem (1123 bytes)
	I0729 17:49:29.407923  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem
	I0729 17:49:29.407949  105708 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem, removing ...
	I0729 17:49:29.407959  105708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem
	I0729 17:49:29.407994  105708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem (1679 bytes)
	I0729 17:49:29.408061  105708 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem org=jenkins.ha-794405 san=[127.0.0.1 192.168.39.102 ha-794405 localhost minikube]
	I0729 17:49:29.582277  105708 provision.go:177] copyRemoteCerts
	I0729 17:49:29.582350  105708 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 17:49:29.582379  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHHostname
	I0729 17:49:29.584803  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:29.585095  105708 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:49:29.585120  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:29.585246  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHPort
	I0729 17:49:29.585386  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:49:29.585595  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHUsername
	I0729 17:49:29.585742  105708 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405/id_rsa Username:docker}
	I0729 17:49:29.666289  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 17:49:29.666361  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 17:49:29.689260  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 17:49:29.689314  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0729 17:49:29.711383  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 17:49:29.711435  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 17:49:29.733565  105708 provision.go:87] duration metric: took 331.99164ms to configureAuth
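configureAuth above copies the host CA material and generates a server certificate for the guest with SANs 127.0.0.1, 192.168.39.102, ha-794405, localhost and minikube before scp'ing it to /etc/docker. A self-contained sketch of such a certificate template in Go; it self-signs instead of signing with the minikube CA purely to stay short, and the 26280h lifetime mirrors the CertExpiration value in the profile config dumped further down:

    // server_cert.go: illustrative version of the "generating server cert" step.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-794405"}}, // org= value from the log
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"ha-794405", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.102")},
        }
        // Self-signed here; the real step signs with ca.pem / ca-key.pem.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
    }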
	I0729 17:49:29.733587  105708 buildroot.go:189] setting minikube options for container-runtime
	I0729 17:49:29.733753  105708 config.go:182] Loaded profile config "ha-794405": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:49:29.733831  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHHostname
	I0729 17:49:29.736447  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:29.736759  105708 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:49:29.736789  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:29.736969  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHPort
	I0729 17:49:29.737139  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:49:29.737314  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:49:29.737459  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHUsername
	I0729 17:49:29.737632  105708 main.go:141] libmachine: Using SSH client type: native
	I0729 17:49:29.737790  105708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0729 17:49:29.737809  105708 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 17:49:30.018714  105708 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 17:49:30.018744  105708 main.go:141] libmachine: Checking connection to Docker...
	I0729 17:49:30.018754  105708 main.go:141] libmachine: (ha-794405) Calling .GetURL
	I0729 17:49:30.020110  105708 main.go:141] libmachine: (ha-794405) DBG | Using libvirt version 6000000
	I0729 17:49:30.022350  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:30.022691  105708 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:49:30.022708  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:30.022868  105708 main.go:141] libmachine: Docker is up and running!
	I0729 17:49:30.022891  105708 main.go:141] libmachine: Reticulating splines...
	I0729 17:49:30.022899  105708 client.go:171] duration metric: took 27.10548559s to LocalClient.Create
	I0729 17:49:30.022921  105708 start.go:167] duration metric: took 27.10555277s to libmachine.API.Create "ha-794405"
	I0729 17:49:30.022934  105708 start.go:293] postStartSetup for "ha-794405" (driver="kvm2")
	I0729 17:49:30.022954  105708 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 17:49:30.022976  105708 main.go:141] libmachine: (ha-794405) Calling .DriverName
	I0729 17:49:30.023222  105708 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 17:49:30.023253  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHHostname
	I0729 17:49:30.025417  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:30.025743  105708 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:49:30.025766  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:30.025928  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHPort
	I0729 17:49:30.026124  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:49:30.026283  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHUsername
	I0729 17:49:30.026433  105708 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405/id_rsa Username:docker}
	I0729 17:49:30.106738  105708 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 17:49:30.110753  105708 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 17:49:30.110784  105708 filesync.go:126] Scanning /home/jenkins/minikube-integration/19339-88081/.minikube/addons for local assets ...
	I0729 17:49:30.110834  105708 filesync.go:126] Scanning /home/jenkins/minikube-integration/19339-88081/.minikube/files for local assets ...
	I0729 17:49:30.110921  105708 filesync.go:149] local asset: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem -> 952822.pem in /etc/ssl/certs
	I0729 17:49:30.110935  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem -> /etc/ssl/certs/952822.pem
	I0729 17:49:30.111028  105708 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 17:49:30.119685  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem --> /etc/ssl/certs/952822.pem (1708 bytes)
	I0729 17:49:30.141793  105708 start.go:296] duration metric: took 118.84673ms for postStartSetup
	I0729 17:49:30.141841  105708 main.go:141] libmachine: (ha-794405) Calling .GetConfigRaw
	I0729 17:49:30.142370  105708 main.go:141] libmachine: (ha-794405) Calling .GetIP
	I0729 17:49:30.145013  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:30.145400  105708 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:49:30.145424  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:30.145667  105708 profile.go:143] Saving config to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/config.json ...
	I0729 17:49:30.145826  105708 start.go:128] duration metric: took 27.246430846s to createHost
	I0729 17:49:30.145848  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHHostname
	I0729 17:49:30.147850  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:30.148123  105708 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:49:30.148148  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:30.148271  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHPort
	I0729 17:49:30.148560  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:49:30.148723  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:49:30.148896  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHUsername
	I0729 17:49:30.149066  105708 main.go:141] libmachine: Using SSH client type: native
	I0729 17:49:30.149290  105708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0729 17:49:30.149302  105708 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 17:49:30.249133  105708 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722275370.230035605
	
	I0729 17:49:30.249158  105708 fix.go:216] guest clock: 1722275370.230035605
	I0729 17:49:30.249167  105708 fix.go:229] Guest: 2024-07-29 17:49:30.230035605 +0000 UTC Remote: 2024-07-29 17:49:30.145838608 +0000 UTC m=+27.355399708 (delta=84.196997ms)
	I0729 17:49:30.249187  105708 fix.go:200] guest clock delta is within tolerance: 84.196997ms
	I0729 17:49:30.249192  105708 start.go:83] releasing machines lock for "ha-794405", held for 27.349876645s
	I0729 17:49:30.249218  105708 main.go:141] libmachine: (ha-794405) Calling .DriverName
	I0729 17:49:30.249490  105708 main.go:141] libmachine: (ha-794405) Calling .GetIP
	I0729 17:49:30.251823  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:30.252165  105708 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:49:30.252199  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:30.252383  105708 main.go:141] libmachine: (ha-794405) Calling .DriverName
	I0729 17:49:30.252876  105708 main.go:141] libmachine: (ha-794405) Calling .DriverName
	I0729 17:49:30.253056  105708 main.go:141] libmachine: (ha-794405) Calling .DriverName
	I0729 17:49:30.253278  105708 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 17:49:30.253347  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHHostname
	I0729 17:49:30.253283  105708 ssh_runner.go:195] Run: cat /version.json
	I0729 17:49:30.253410  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHHostname
	I0729 17:49:30.256015  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:30.256305  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:30.256459  105708 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:49:30.256482  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:30.256586  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHPort
	I0729 17:49:30.256608  105708 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:49:30.256626  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:30.256763  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:49:30.256788  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHPort
	I0729 17:49:30.256949  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:49:30.256960  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHUsername
	I0729 17:49:30.257138  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHUsername
	I0729 17:49:30.257128  105708 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405/id_rsa Username:docker}
	I0729 17:49:30.257295  105708 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405/id_rsa Username:docker}
	I0729 17:49:30.355760  105708 ssh_runner.go:195] Run: systemctl --version
	I0729 17:49:30.361465  105708 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 17:49:30.513451  105708 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 17:49:30.520499  105708 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 17:49:30.520676  105708 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 17:49:30.537130  105708 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 17:49:30.537153  105708 start.go:495] detecting cgroup driver to use...
	I0729 17:49:30.537214  105708 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 17:49:30.552663  105708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 17:49:30.566114  105708 docker.go:217] disabling cri-docker service (if available) ...
	I0729 17:49:30.566173  105708 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 17:49:30.579553  105708 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 17:49:30.593111  105708 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 17:49:30.699759  105708 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 17:49:30.852778  105708 docker.go:233] disabling docker service ...
	I0729 17:49:30.852877  105708 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 17:49:30.866825  105708 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 17:49:30.879979  105708 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 17:49:31.005064  105708 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 17:49:31.128047  105708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 17:49:31.141567  105708 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 17:49:31.159589  105708 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 17:49:31.159659  105708 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:49:31.169610  105708 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 17:49:31.169667  105708 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:49:31.179745  105708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:49:31.190025  105708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:49:31.200741  105708 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 17:49:31.211700  105708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:49:31.222206  105708 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:49:31.239291  105708 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:49:31.249523  105708 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 17:49:31.258669  105708 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 17:49:31.258723  105708 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 17:49:31.270748  105708 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 17:49:31.279746  105708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 17:49:31.398598  105708 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 17:49:31.541730  105708 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 17:49:31.541811  105708 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 17:49:31.547368  105708 start.go:563] Will wait 60s for crictl version
	I0729 17:49:31.547425  105708 ssh_runner.go:195] Run: which crictl
	I0729 17:49:31.551142  105708 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 17:49:31.591665  105708 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 17:49:31.591752  105708 ssh_runner.go:195] Run: crio --version
	I0729 17:49:31.618720  105708 ssh_runner.go:195] Run: crio --version
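The sed commands above shape /etc/crio/crio.conf.d/02-crio.conf: the pause image is pinned to registry.k8s.io/pause:3.9, cgroup_manager is switched to cgroupfs, conmon is moved into the pod cgroup, and net.ipv4.ip_unprivileged_port_start=0 is added to default_sysctls before crio is restarted. An illustrative checker (not part of the test suite) for those four settings:

    // check_crio_conf.go: illustrative verification of the drop-in config edited above.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const path = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(path)
        if err != nil {
            fmt.Println("cannot read", path+":", err)
            return
        }
        conf := string(data)
        for _, want := range []string{
            `pause_image = "registry.k8s.io/pause:3.9"`,
            `cgroup_manager = "cgroupfs"`,
            `conmon_cgroup = "pod"`,
            `"net.ipv4.ip_unprivileged_port_start=0"`,
        } {
            if strings.Contains(conf, want) {
                fmt.Println("ok:     ", want)
            } else {
                fmt.Println("missing:", want)
            }
        }
    }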
	I0729 17:49:31.651924  105708 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 17:49:31.653214  105708 main.go:141] libmachine: (ha-794405) Calling .GetIP
	I0729 17:49:31.655590  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:31.655858  105708 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:49:31.655888  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:49:31.656049  105708 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 17:49:31.660141  105708 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 17:49:31.673311  105708 kubeadm.go:883] updating cluster {Name:ha-794405 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-794405 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 17:49:31.673412  105708 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 17:49:31.673451  105708 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 17:49:31.705175  105708 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 17:49:31.705232  105708 ssh_runner.go:195] Run: which lz4
	I0729 17:49:31.708904  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0729 17:49:31.708980  105708 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 17:49:31.712793  105708 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 17:49:31.712821  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 17:49:33.062759  105708 crio.go:462] duration metric: took 1.353792868s to copy over tarball
	I0729 17:49:33.062838  105708 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 17:49:35.126738  105708 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.063865498s)
	I0729 17:49:35.126767  105708 crio.go:469] duration metric: took 2.06397882s to extract the tarball
	I0729 17:49:35.126776  105708 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 17:49:35.164202  105708 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 17:49:35.210338  105708 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 17:49:35.210361  105708 cache_images.go:84] Images are preloaded, skipping loading
	I0729 17:49:35.210369  105708 kubeadm.go:934] updating node { 192.168.39.102 8443 v1.30.3 crio true true} ...
	I0729 17:49:35.210476  105708 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-794405 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.102
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-794405 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 17:49:35.210543  105708 ssh_runner.go:195] Run: crio config
	I0729 17:49:35.260156  105708 cni.go:84] Creating CNI manager for ""
	I0729 17:49:35.260185  105708 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0729 17:49:35.260197  105708 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 17:49:35.260224  105708 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.102 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-794405 NodeName:ha-794405 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.102"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.102 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 17:49:35.260425  105708 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.102
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-794405"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.102
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.102"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
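	The kubeadm.yaml rendered above is a four-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration, separated by "---". A small stdlib-only Go sketch that lists the document kinds from the file minikube later writes to /var/tmp/minikube/kubeadm.yaml (illustrative only; not part of minikube):

	// listKinds prints the "kind:" of every document in a multi-document
	// kubeadm config such as the one generated above.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		for _, doc := range strings.Split(string(data), "\n---\n") {
			for _, line := range strings.Split(doc, "\n") {
				if strings.HasPrefix(line, "kind: ") {
					// Expected output for this run: InitConfiguration,
					// ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration.
					fmt.Println(strings.TrimPrefix(line, "kind: "))
				}
			}
		}
	}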
	
	I0729 17:49:35.260458  105708 kube-vip.go:115] generating kube-vip config ...
	I0729 17:49:35.260531  105708 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 17:49:35.276684  105708 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 17:49:35.276790  105708 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0729 17:49:35.276849  105708 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 17:49:35.286712  105708 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 17:49:35.286768  105708 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0729 17:49:35.295906  105708 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0729 17:49:35.311637  105708 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 17:49:35.327414  105708 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0729 17:49:35.343043  105708 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0729 17:49:35.359377  105708 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 17:49:35.363291  105708 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 17:49:35.375704  105708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 17:49:35.489508  105708 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 17:49:35.505445  105708 certs.go:68] Setting up /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405 for IP: 192.168.39.102
	I0729 17:49:35.505475  105708 certs.go:194] generating shared ca certs ...
	I0729 17:49:35.505496  105708 certs.go:226] acquiring lock for ca certs: {Name:mk20106d58b3f22ea0fc0f4b499fa3fa572a2690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:49:35.505692  105708 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key
	I0729 17:49:35.505757  105708 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key
	I0729 17:49:35.505772  105708 certs.go:256] generating profile certs ...
	I0729 17:49:35.505836  105708 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/client.key
	I0729 17:49:35.505853  105708 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/client.crt with IP's: []
	I0729 17:49:35.801022  105708 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/client.crt ...
	I0729 17:49:35.801067  105708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/client.crt: {Name:mkb8dd0c0c2d582f5ff5bb1fee374e0e6a310340 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:49:35.801267  105708 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/client.key ...
	I0729 17:49:35.801285  105708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/client.key: {Name:mkd4acd873e144301116c0340b52fa7490e94eb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:49:35.801393  105708 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.key.778a7140
	I0729 17:49:35.801412  105708 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.crt.778a7140 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.102 192.168.39.254]
	I0729 17:49:35.924277  105708 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.crt.778a7140 ...
	I0729 17:49:35.924310  105708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.crt.778a7140: {Name:mk0d46e1c11a2b050eaf1c974c78ccbcd4025fec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:49:35.924476  105708 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.key.778a7140 ...
	I0729 17:49:35.924493  105708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.key.778a7140: {Name:mk1d617f4ecae50f4a793285b8a14d10a8917d57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:49:35.924595  105708 certs.go:381] copying /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.crt.778a7140 -> /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.crt
	I0729 17:49:35.924715  105708 certs.go:385] copying /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.key.778a7140 -> /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.key
	I0729 17:49:35.924798  105708 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/proxy-client.key
	I0729 17:49:35.924820  105708 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/proxy-client.crt with IP's: []
	I0729 17:49:36.012100  105708 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/proxy-client.crt ...
	I0729 17:49:36.012133  105708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/proxy-client.crt: {Name:mk5dfd47a29e68c44b7150fb205a8b9651147a30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:49:36.012301  105708 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/proxy-client.key ...
	I0729 17:49:36.012317  105708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/proxy-client.key: {Name:mk9cf98a9eefaddd0bc8e7780f0dd63ef76e3e40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:49:36.012411  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 17:49:36.012434  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 17:49:36.012450  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 17:49:36.012466  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 17:49:36.012482  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 17:49:36.012501  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 17:49:36.012519  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 17:49:36.012544  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 17:49:36.012608  105708 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282.pem (1338 bytes)
	W0729 17:49:36.012657  105708 certs.go:480] ignoring /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282_empty.pem, impossibly tiny 0 bytes
	I0729 17:49:36.012671  105708 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 17:49:36.012707  105708 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem (1078 bytes)
	I0729 17:49:36.012740  105708 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem (1123 bytes)
	I0729 17:49:36.012774  105708 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem (1679 bytes)
	I0729 17:49:36.012832  105708 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem (1708 bytes)
	I0729 17:49:36.012914  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282.pem -> /usr/share/ca-certificates/95282.pem
	I0729 17:49:36.012941  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem -> /usr/share/ca-certificates/952822.pem
	I0729 17:49:36.012958  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:49:36.013570  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 17:49:36.039509  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 17:49:36.063983  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 17:49:36.088319  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 17:49:36.112761  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 17:49:36.136058  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 17:49:36.159574  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 17:49:36.182816  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 17:49:36.205661  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282.pem --> /usr/share/ca-certificates/95282.pem (1338 bytes)
	I0729 17:49:36.228619  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem --> /usr/share/ca-certificates/952822.pem (1708 bytes)
	I0729 17:49:36.251370  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 17:49:36.276742  105708 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 17:49:36.296764  105708 ssh_runner.go:195] Run: openssl version
	I0729 17:49:36.307903  105708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/952822.pem && ln -fs /usr/share/ca-certificates/952822.pem /etc/ssl/certs/952822.pem"
	I0729 17:49:36.324801  105708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/952822.pem
	I0729 17:49:36.330455  105708 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:45 /usr/share/ca-certificates/952822.pem
	I0729 17:49:36.330518  105708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/952822.pem
	I0729 17:49:36.336947  105708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/952822.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 17:49:36.347667  105708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 17:49:36.358240  105708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:49:36.362571  105708 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:49:36.362645  105708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:49:36.368228  105708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 17:49:36.380400  105708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/95282.pem && ln -fs /usr/share/ca-certificates/95282.pem /etc/ssl/certs/95282.pem"
	I0729 17:49:36.391176  105708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/95282.pem
	I0729 17:49:36.395579  105708 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:45 /usr/share/ca-certificates/95282.pem
	I0729 17:49:36.395643  105708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/95282.pem
	I0729 17:49:36.401174  105708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/95282.pem /etc/ssl/certs/51391683.0"
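	The three openssl/ln blocks above follow the standard OpenSSL trust-store convention: each CA certificate copied to /usr/share/ca-certificates is hashed with "openssl x509 -hash -noout" and symlinked as /etc/ssl/certs/<hash>.0 (for example b5213941.0 for minikubeCA.pem). A Go sketch of the same pattern, shelling out to openssl (assumes openssl on PATH and root privileges; illustrative, not minikube code):

	// linkCACert hashes a PEM certificate with openssl and creates the
	// /etc/ssl/certs/<hash>.0 symlink the TLS trust store expects,
	// mirroring the `openssl x509 -hash` + `ln -fs` steps in the log.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func linkCACert(pemPath string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return "", err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // emulate ln -fs: replace any stale link
		return link, os.Symlink(pemPath, link)
	}

	func main() {
		link, err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("linked", link)
	}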
	I0729 17:49:36.413631  105708 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 17:49:36.417843  105708 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 17:49:36.417890  105708 kubeadm.go:392] StartCluster: {Name:ha-794405 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-794405 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:49:36.417959  105708 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 17:49:36.417994  105708 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 17:49:36.458098  105708 cri.go:89] found id: ""
	I0729 17:49:36.458164  105708 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 17:49:36.468928  105708 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 17:49:36.479175  105708 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 17:49:36.490020  105708 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 17:49:36.490036  105708 kubeadm.go:157] found existing configuration files:
	
	I0729 17:49:36.490101  105708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 17:49:36.498544  105708 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 17:49:36.498591  105708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 17:49:36.507258  105708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 17:49:36.515477  105708 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 17:49:36.515537  105708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 17:49:36.524129  105708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 17:49:36.532366  105708 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 17:49:36.532404  105708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 17:49:36.540843  105708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 17:49:36.548912  105708 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 17:49:36.548950  105708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 17:49:36.557445  105708 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 17:49:36.784593  105708 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 17:49:47.860353  105708 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 17:49:47.860432  105708 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 17:49:47.860544  105708 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 17:49:47.860678  105708 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 17:49:47.860804  105708 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 17:49:47.860923  105708 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 17:49:47.862388  105708 out.go:204]   - Generating certificates and keys ...
	I0729 17:49:47.862465  105708 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 17:49:47.862522  105708 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 17:49:47.862596  105708 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0729 17:49:47.862648  105708 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0729 17:49:47.862719  105708 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0729 17:49:47.862814  105708 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0729 17:49:47.862884  105708 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0729 17:49:47.863008  105708 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-794405 localhost] and IPs [192.168.39.102 127.0.0.1 ::1]
	I0729 17:49:47.863066  105708 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0729 17:49:47.863176  105708 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-794405 localhost] and IPs [192.168.39.102 127.0.0.1 ::1]
	I0729 17:49:47.863233  105708 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0729 17:49:47.863291  105708 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0729 17:49:47.863329  105708 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0729 17:49:47.863425  105708 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 17:49:47.863496  105708 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 17:49:47.863544  105708 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 17:49:47.863600  105708 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 17:49:47.863681  105708 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 17:49:47.863757  105708 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 17:49:47.863885  105708 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 17:49:47.863946  105708 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 17:49:47.865259  105708 out.go:204]   - Booting up control plane ...
	I0729 17:49:47.865344  105708 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 17:49:47.865408  105708 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 17:49:47.865467  105708 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 17:49:47.865576  105708 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 17:49:47.865708  105708 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 17:49:47.865773  105708 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 17:49:47.865925  105708 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 17:49:47.866016  105708 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 17:49:47.866078  105708 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.688229ms
	I0729 17:49:47.866141  105708 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 17:49:47.866191  105708 kubeadm.go:310] [api-check] The API server is healthy after 5.882551655s
	I0729 17:49:47.866307  105708 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 17:49:47.866494  105708 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 17:49:47.866568  105708 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 17:49:47.866717  105708 kubeadm.go:310] [mark-control-plane] Marking the node ha-794405 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 17:49:47.866792  105708 kubeadm.go:310] [bootstrap-token] Using token: f793nk.j9zxoiw0utdua39g
	I0729 17:49:47.868137  105708 out.go:204]   - Configuring RBAC rules ...
	I0729 17:49:47.868241  105708 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 17:49:47.868310  105708 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 17:49:47.868428  105708 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 17:49:47.868535  105708 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 17:49:47.868674  105708 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 17:49:47.868804  105708 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 17:49:47.868972  105708 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 17:49:47.869022  105708 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 17:49:47.869061  105708 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 17:49:47.869067  105708 kubeadm.go:310] 
	I0729 17:49:47.869116  105708 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 17:49:47.869122  105708 kubeadm.go:310] 
	I0729 17:49:47.869199  105708 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 17:49:47.869213  105708 kubeadm.go:310] 
	I0729 17:49:47.869258  105708 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 17:49:47.869309  105708 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 17:49:47.869352  105708 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 17:49:47.869362  105708 kubeadm.go:310] 
	I0729 17:49:47.869405  105708 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 17:49:47.869411  105708 kubeadm.go:310] 
	I0729 17:49:47.869453  105708 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 17:49:47.869459  105708 kubeadm.go:310] 
	I0729 17:49:47.869505  105708 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 17:49:47.869569  105708 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 17:49:47.869626  105708 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 17:49:47.869632  105708 kubeadm.go:310] 
	I0729 17:49:47.869702  105708 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 17:49:47.869765  105708 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 17:49:47.869771  105708 kubeadm.go:310] 
	I0729 17:49:47.869865  105708 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token f793nk.j9zxoiw0utdua39g \
	I0729 17:49:47.869991  105708 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:13f0bc092dc09b7291dec830fc09c94db5ac1707dd26df6091df80009af3af7f \
	I0729 17:49:47.870023  105708 kubeadm.go:310] 	--control-plane 
	I0729 17:49:47.870035  105708 kubeadm.go:310] 
	I0729 17:49:47.870146  105708 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 17:49:47.870155  105708 kubeadm.go:310] 
	I0729 17:49:47.870255  105708 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token f793nk.j9zxoiw0utdua39g \
	I0729 17:49:47.870352  105708 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:13f0bc092dc09b7291dec830fc09c94db5ac1707dd26df6091df80009af3af7f 
	I0729 17:49:47.870363  105708 cni.go:84] Creating CNI manager for ""
	I0729 17:49:47.870369  105708 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0729 17:49:47.871961  105708 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0729 17:49:47.873106  105708 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0729 17:49:47.878863  105708 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0729 17:49:47.878884  105708 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0729 17:49:47.898096  105708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0729 17:49:48.273726  105708 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 17:49:48.273833  105708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:49:48.273847  105708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-794405 minikube.k8s.io/updated_at=2024_07_29T17_49_48_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=de53afae5e8f4269438099f4dad14d93a8a17e35 minikube.k8s.io/name=ha-794405 minikube.k8s.io/primary=true
	I0729 17:49:48.392193  105708 ops.go:34] apiserver oom_adj: -16
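	The oom_adj probe a few lines above (cat /proc/$(pgrep kube-apiserver)/oom_adj) confirms the API server static pod is protected from the OOM killer; this run reports -16. A rough stdlib-only Go equivalent of that probe (illustrative, not minikube's implementation; relies on the legacy /proc/<pid>/oom_adj file that this VM exposes):

	// findOOMAdj locates a process by name under /proc and reads its oom_adj
	// value, mirroring `cat /proc/$(pgrep kube-apiserver)/oom_adj`.
	package main

	import (
		"fmt"
		"os"
		"strconv"
		"strings"
	)

	func findOOMAdj(name string) (string, error) {
		entries, err := os.ReadDir("/proc")
		if err != nil {
			return "", err
		}
		for _, e := range entries {
			if _, err := strconv.Atoi(e.Name()); err != nil {
				continue // not a pid directory
			}
			comm, err := os.ReadFile("/proc/" + e.Name() + "/comm")
			if err != nil || strings.TrimSpace(string(comm)) != name {
				continue
			}
			adj, err := os.ReadFile("/proc/" + e.Name() + "/oom_adj")
			if err != nil {
				return "", err
			}
			return strings.TrimSpace(string(adj)), nil
		}
		return "", fmt.Errorf("process %q not found", name)
	}

	func main() {
		adj, err := findOOMAdj("kube-apiserver")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println(adj) // the log above reported -16
	}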
	I0729 17:49:48.397487  105708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:49:48.898351  105708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:49:49.397647  105708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:49:49.898315  105708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:49:50.397596  105708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:49:50.898011  105708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:49:51.398316  105708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:49:51.897916  105708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:49:52.398054  105708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:49:52.898459  105708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:49:53.397571  105708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:49:53.898087  105708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:49:54.398327  105708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:49:54.898300  105708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:49:55.397773  105708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:49:55.897494  105708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:49:56.397574  105708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:49:56.897575  105708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:49:57.398162  105708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:49:57.898284  105708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:49:58.397866  105708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:49:58.897616  105708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:49:59.398293  105708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:49:59.897528  105708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:50:00.398548  105708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:50:00.898527  105708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 17:50:00.991403  105708 kubeadm.go:1113] duration metric: took 12.717647136s to wait for elevateKubeSystemPrivileges
	I0729 17:50:00.991435  105708 kubeadm.go:394] duration metric: took 24.573549363s to StartCluster
	I0729 17:50:00.991454  105708 settings.go:142] acquiring lock: {Name:mk5599d3fc2f664ed5eea99f33b4436f64ab8c83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:50:00.991544  105708 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19339-88081/kubeconfig
	I0729 17:50:00.992360  105708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/kubeconfig: {Name:mk6702c0db404329102489655bdd2ff03ad6e919 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:50:00.992634  105708 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 17:50:00.992628  105708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0729 17:50:00.992674  105708 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 17:50:00.992734  105708 addons.go:69] Setting storage-provisioner=true in profile "ha-794405"
	I0729 17:50:00.992755  105708 addons.go:234] Setting addon storage-provisioner=true in "ha-794405"
	I0729 17:50:00.992658  105708 start.go:241] waiting for startup goroutines ...
	I0729 17:50:00.992781  105708 addons.go:69] Setting default-storageclass=true in profile "ha-794405"
	I0729 17:50:00.992798  105708 host.go:66] Checking if "ha-794405" exists ...
	I0729 17:50:00.992833  105708 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-794405"
	I0729 17:50:00.992958  105708 config.go:182] Loaded profile config "ha-794405": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:50:00.993262  105708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:50:00.993295  105708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:50:00.993308  105708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:50:00.993337  105708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:50:01.008821  105708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39031
	I0729 17:50:01.008899  105708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35927
	I0729 17:50:01.009407  105708 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:50:01.009474  105708 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:50:01.009972  105708 main.go:141] libmachine: Using API Version  1
	I0729 17:50:01.009979  105708 main.go:141] libmachine: Using API Version  1
	I0729 17:50:01.009995  105708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:50:01.010024  105708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:50:01.010319  105708 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:50:01.010325  105708 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:50:01.010485  105708 main.go:141] libmachine: (ha-794405) Calling .GetState
	I0729 17:50:01.010906  105708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:50:01.010948  105708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:50:01.012581  105708 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19339-88081/kubeconfig
	I0729 17:50:01.012953  105708 kapi.go:59] client config for ha-794405: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/client.crt", KeyFile:"/home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/client.key", CAFile:"/home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 17:50:01.013422  105708 cert_rotation.go:137] Starting client certificate rotation controller
	I0729 17:50:01.013585  105708 addons.go:234] Setting addon default-storageclass=true in "ha-794405"
	I0729 17:50:01.013629  105708 host.go:66] Checking if "ha-794405" exists ...
	I0729 17:50:01.013895  105708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:50:01.013932  105708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:50:01.025602  105708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40799
	I0729 17:50:01.025987  105708 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:50:01.026483  105708 main.go:141] libmachine: Using API Version  1
	I0729 17:50:01.026508  105708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:50:01.026866  105708 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:50:01.027060  105708 main.go:141] libmachine: (ha-794405) Calling .GetState
	I0729 17:50:01.028244  105708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41205
	I0729 17:50:01.028631  105708 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:50:01.028748  105708 main.go:141] libmachine: (ha-794405) Calling .DriverName
	I0729 17:50:01.029562  105708 main.go:141] libmachine: Using API Version  1
	I0729 17:50:01.029585  105708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:50:01.030995  105708 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:50:01.031213  105708 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 17:50:01.031542  105708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:50:01.031572  105708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:50:01.032627  105708 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 17:50:01.032648  105708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 17:50:01.032669  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHHostname
	I0729 17:50:01.035387  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:50:01.035811  105708 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:50:01.035839  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:50:01.036001  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHPort
	I0729 17:50:01.036171  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:50:01.036305  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHUsername
	I0729 17:50:01.036437  105708 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405/id_rsa Username:docker}
	I0729 17:50:01.046281  105708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46543
	I0729 17:50:01.046677  105708 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:50:01.047085  105708 main.go:141] libmachine: Using API Version  1
	I0729 17:50:01.047105  105708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:50:01.047404  105708 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:50:01.047567  105708 main.go:141] libmachine: (ha-794405) Calling .GetState
	I0729 17:50:01.048792  105708 main.go:141] libmachine: (ha-794405) Calling .DriverName
	I0729 17:50:01.048991  105708 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 17:50:01.049007  105708 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 17:50:01.049023  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHHostname
	I0729 17:50:01.051432  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:50:01.051810  105708 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:50:01.051838  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:50:01.051964  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHPort
	I0729 17:50:01.052128  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:50:01.052254  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHUsername
	I0729 17:50:01.052371  105708 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405/id_rsa Username:docker}
	I0729 17:50:01.160458  105708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0729 17:50:01.173405  105708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 17:50:01.227718  105708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 17:50:01.652915  105708 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0729 17:50:01.653003  105708 main.go:141] libmachine: Making call to close driver server
	I0729 17:50:01.653030  105708 main.go:141] libmachine: (ha-794405) Calling .Close
	I0729 17:50:01.653359  105708 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:50:01.653379  105708 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:50:01.653380  105708 main.go:141] libmachine: (ha-794405) DBG | Closing plugin on server side
	I0729 17:50:01.653388  105708 main.go:141] libmachine: Making call to close driver server
	I0729 17:50:01.653397  105708 main.go:141] libmachine: (ha-794405) Calling .Close
	I0729 17:50:01.653646  105708 main.go:141] libmachine: (ha-794405) DBG | Closing plugin on server side
	I0729 17:50:01.653679  105708 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:50:01.653687  105708 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:50:01.653818  105708 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0729 17:50:01.653830  105708 round_trippers.go:469] Request Headers:
	I0729 17:50:01.653841  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:50:01.653847  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:50:01.662935  105708 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0729 17:50:01.663732  105708 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0729 17:50:01.663751  105708 round_trippers.go:469] Request Headers:
	I0729 17:50:01.663762  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:50:01.663770  105708 round_trippers.go:473]     Content-Type: application/json
	I0729 17:50:01.663776  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:50:01.666393  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:50:01.666546  105708 main.go:141] libmachine: Making call to close driver server
	I0729 17:50:01.666564  105708 main.go:141] libmachine: (ha-794405) Calling .Close
	I0729 17:50:01.666814  105708 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:50:01.666834  105708 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:50:01.915772  105708 main.go:141] libmachine: Making call to close driver server
	I0729 17:50:01.915797  105708 main.go:141] libmachine: (ha-794405) Calling .Close
	I0729 17:50:01.916079  105708 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:50:01.916103  105708 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:50:01.916113  105708 main.go:141] libmachine: Making call to close driver server
	I0729 17:50:01.916122  105708 main.go:141] libmachine: (ha-794405) Calling .Close
	I0729 17:50:01.916200  105708 main.go:141] libmachine: (ha-794405) DBG | Closing plugin on server side
	I0729 17:50:01.916373  105708 main.go:141] libmachine: Successfully made call to close driver server
	I0729 17:50:01.916390  105708 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 17:50:01.918156  105708 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0729 17:50:01.919363  105708 addons.go:510] duration metric: took 926.694377ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0729 17:50:01.919397  105708 start.go:246] waiting for cluster config update ...
	I0729 17:50:01.919413  105708 start.go:255] writing updated cluster config ...
	I0729 17:50:01.921100  105708 out.go:177] 
	I0729 17:50:01.922789  105708 config.go:182] Loaded profile config "ha-794405": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:50:01.922990  105708 profile.go:143] Saving config to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/config.json ...
	I0729 17:50:01.925081  105708 out.go:177] * Starting "ha-794405-m02" control-plane node in "ha-794405" cluster
	I0729 17:50:01.926217  105708 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 17:50:01.926241  105708 cache.go:56] Caching tarball of preloaded images
	I0729 17:50:01.926333  105708 preload.go:172] Found /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 17:50:01.926344  105708 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 17:50:01.926405  105708 profile.go:143] Saving config to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/config.json ...
	I0729 17:50:01.926557  105708 start.go:360] acquireMachinesLock for ha-794405-m02: {Name:mkb4fc91615189a18c2505c715f6575ee0da2912 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:50:01.926596  105708 start.go:364] duration metric: took 21.492µs to acquireMachinesLock for "ha-794405-m02"
	I0729 17:50:01.926624  105708 start.go:93] Provisioning new machine with config: &{Name:ha-794405 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-794405 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 17:50:01.926695  105708 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0729 17:50:01.928252  105708 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 17:50:01.928329  105708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:50:01.928356  105708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:50:01.943467  105708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45805
	I0729 17:50:01.943900  105708 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:50:01.944469  105708 main.go:141] libmachine: Using API Version  1
	I0729 17:50:01.944498  105708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:50:01.944878  105708 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:50:01.945184  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetMachineName
	I0729 17:50:01.945341  105708 main.go:141] libmachine: (ha-794405-m02) Calling .DriverName
	I0729 17:50:01.945526  105708 start.go:159] libmachine.API.Create for "ha-794405" (driver="kvm2")
	I0729 17:50:01.945554  105708 client.go:168] LocalClient.Create starting
	I0729 17:50:01.945600  105708 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem
	I0729 17:50:01.945644  105708 main.go:141] libmachine: Decoding PEM data...
	I0729 17:50:01.945664  105708 main.go:141] libmachine: Parsing certificate...
	I0729 17:50:01.945739  105708 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem
	I0729 17:50:01.945767  105708 main.go:141] libmachine: Decoding PEM data...
	I0729 17:50:01.945783  105708 main.go:141] libmachine: Parsing certificate...
	I0729 17:50:01.945809  105708 main.go:141] libmachine: Running pre-create checks...
	I0729 17:50:01.945826  105708 main.go:141] libmachine: (ha-794405-m02) Calling .PreCreateCheck
	I0729 17:50:01.946000  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetConfigRaw
	I0729 17:50:01.946430  105708 main.go:141] libmachine: Creating machine...
	I0729 17:50:01.946447  105708 main.go:141] libmachine: (ha-794405-m02) Calling .Create
	I0729 17:50:01.946563  105708 main.go:141] libmachine: (ha-794405-m02) Creating KVM machine...
	I0729 17:50:01.947893  105708 main.go:141] libmachine: (ha-794405-m02) DBG | found existing default KVM network
	I0729 17:50:01.947995  105708 main.go:141] libmachine: (ha-794405-m02) DBG | found existing private KVM network mk-ha-794405
	I0729 17:50:01.948183  105708 main.go:141] libmachine: (ha-794405-m02) Setting up store path in /home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m02 ...
	I0729 17:50:01.948210  105708 main.go:141] libmachine: (ha-794405-m02) Building disk image from file:///home/jenkins/minikube-integration/19339-88081/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0729 17:50:01.948255  105708 main.go:141] libmachine: (ha-794405-m02) DBG | I0729 17:50:01.948157  106097 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19339-88081/.minikube
	I0729 17:50:01.948341  105708 main.go:141] libmachine: (ha-794405-m02) Downloading /home/jenkins/minikube-integration/19339-88081/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19339-88081/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0729 17:50:02.206815  105708 main.go:141] libmachine: (ha-794405-m02) DBG | I0729 17:50:02.206692  106097 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m02/id_rsa...
	I0729 17:50:02.429331  105708 main.go:141] libmachine: (ha-794405-m02) DBG | I0729 17:50:02.429205  106097 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m02/ha-794405-m02.rawdisk...
	I0729 17:50:02.429374  105708 main.go:141] libmachine: (ha-794405-m02) DBG | Writing magic tar header
	I0729 17:50:02.429389  105708 main.go:141] libmachine: (ha-794405-m02) DBG | Writing SSH key tar header
	I0729 17:50:02.429401  105708 main.go:141] libmachine: (ha-794405-m02) DBG | I0729 17:50:02.429357  106097 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m02 ...
	I0729 17:50:02.429523  105708 main.go:141] libmachine: (ha-794405-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m02
	I0729 17:50:02.429579  105708 main.go:141] libmachine: (ha-794405-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19339-88081/.minikube/machines
	I0729 17:50:02.429596  105708 main.go:141] libmachine: (ha-794405-m02) Setting executable bit set on /home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m02 (perms=drwx------)
	I0729 17:50:02.429612  105708 main.go:141] libmachine: (ha-794405-m02) Setting executable bit set on /home/jenkins/minikube-integration/19339-88081/.minikube/machines (perms=drwxr-xr-x)
	I0729 17:50:02.429626  105708 main.go:141] libmachine: (ha-794405-m02) Setting executable bit set on /home/jenkins/minikube-integration/19339-88081/.minikube (perms=drwxr-xr-x)
	I0729 17:50:02.429640  105708 main.go:141] libmachine: (ha-794405-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19339-88081/.minikube
	I0729 17:50:02.429657  105708 main.go:141] libmachine: (ha-794405-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19339-88081
	I0729 17:50:02.429667  105708 main.go:141] libmachine: (ha-794405-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 17:50:02.429681  105708 main.go:141] libmachine: (ha-794405-m02) DBG | Checking permissions on dir: /home/jenkins
	I0729 17:50:02.429692  105708 main.go:141] libmachine: (ha-794405-m02) DBG | Checking permissions on dir: /home
	I0729 17:50:02.429716  105708 main.go:141] libmachine: (ha-794405-m02) Setting executable bit set on /home/jenkins/minikube-integration/19339-88081 (perms=drwxrwxr-x)
	I0729 17:50:02.429738  105708 main.go:141] libmachine: (ha-794405-m02) DBG | Skipping /home - not owner
	I0729 17:50:02.429751  105708 main.go:141] libmachine: (ha-794405-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 17:50:02.429766  105708 main.go:141] libmachine: (ha-794405-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 17:50:02.429776  105708 main.go:141] libmachine: (ha-794405-m02) Creating domain...
	I0729 17:50:02.430552  105708 main.go:141] libmachine: (ha-794405-m02) define libvirt domain using xml: 
	I0729 17:50:02.430567  105708 main.go:141] libmachine: (ha-794405-m02) <domain type='kvm'>
	I0729 17:50:02.430574  105708 main.go:141] libmachine: (ha-794405-m02)   <name>ha-794405-m02</name>
	I0729 17:50:02.430579  105708 main.go:141] libmachine: (ha-794405-m02)   <memory unit='MiB'>2200</memory>
	I0729 17:50:02.430584  105708 main.go:141] libmachine: (ha-794405-m02)   <vcpu>2</vcpu>
	I0729 17:50:02.430588  105708 main.go:141] libmachine: (ha-794405-m02)   <features>
	I0729 17:50:02.430593  105708 main.go:141] libmachine: (ha-794405-m02)     <acpi/>
	I0729 17:50:02.430597  105708 main.go:141] libmachine: (ha-794405-m02)     <apic/>
	I0729 17:50:02.430602  105708 main.go:141] libmachine: (ha-794405-m02)     <pae/>
	I0729 17:50:02.430615  105708 main.go:141] libmachine: (ha-794405-m02)     
	I0729 17:50:02.430623  105708 main.go:141] libmachine: (ha-794405-m02)   </features>
	I0729 17:50:02.430634  105708 main.go:141] libmachine: (ha-794405-m02)   <cpu mode='host-passthrough'>
	I0729 17:50:02.430641  105708 main.go:141] libmachine: (ha-794405-m02)   
	I0729 17:50:02.430650  105708 main.go:141] libmachine: (ha-794405-m02)   </cpu>
	I0729 17:50:02.430658  105708 main.go:141] libmachine: (ha-794405-m02)   <os>
	I0729 17:50:02.430667  105708 main.go:141] libmachine: (ha-794405-m02)     <type>hvm</type>
	I0729 17:50:02.430675  105708 main.go:141] libmachine: (ha-794405-m02)     <boot dev='cdrom'/>
	I0729 17:50:02.430684  105708 main.go:141] libmachine: (ha-794405-m02)     <boot dev='hd'/>
	I0729 17:50:02.430691  105708 main.go:141] libmachine: (ha-794405-m02)     <bootmenu enable='no'/>
	I0729 17:50:02.430701  105708 main.go:141] libmachine: (ha-794405-m02)   </os>
	I0729 17:50:02.430711  105708 main.go:141] libmachine: (ha-794405-m02)   <devices>
	I0729 17:50:02.430721  105708 main.go:141] libmachine: (ha-794405-m02)     <disk type='file' device='cdrom'>
	I0729 17:50:02.430737  105708 main.go:141] libmachine: (ha-794405-m02)       <source file='/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m02/boot2docker.iso'/>
	I0729 17:50:02.430749  105708 main.go:141] libmachine: (ha-794405-m02)       <target dev='hdc' bus='scsi'/>
	I0729 17:50:02.430759  105708 main.go:141] libmachine: (ha-794405-m02)       <readonly/>
	I0729 17:50:02.430770  105708 main.go:141] libmachine: (ha-794405-m02)     </disk>
	I0729 17:50:02.430779  105708 main.go:141] libmachine: (ha-794405-m02)     <disk type='file' device='disk'>
	I0729 17:50:02.430805  105708 main.go:141] libmachine: (ha-794405-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 17:50:02.430822  105708 main.go:141] libmachine: (ha-794405-m02)       <source file='/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m02/ha-794405-m02.rawdisk'/>
	I0729 17:50:02.430832  105708 main.go:141] libmachine: (ha-794405-m02)       <target dev='hda' bus='virtio'/>
	I0729 17:50:02.430842  105708 main.go:141] libmachine: (ha-794405-m02)     </disk>
	I0729 17:50:02.430853  105708 main.go:141] libmachine: (ha-794405-m02)     <interface type='network'>
	I0729 17:50:02.430865  105708 main.go:141] libmachine: (ha-794405-m02)       <source network='mk-ha-794405'/>
	I0729 17:50:02.430876  105708 main.go:141] libmachine: (ha-794405-m02)       <model type='virtio'/>
	I0729 17:50:02.430887  105708 main.go:141] libmachine: (ha-794405-m02)     </interface>
	I0729 17:50:02.430897  105708 main.go:141] libmachine: (ha-794405-m02)     <interface type='network'>
	I0729 17:50:02.430910  105708 main.go:141] libmachine: (ha-794405-m02)       <source network='default'/>
	I0729 17:50:02.430921  105708 main.go:141] libmachine: (ha-794405-m02)       <model type='virtio'/>
	I0729 17:50:02.430932  105708 main.go:141] libmachine: (ha-794405-m02)     </interface>
	I0729 17:50:02.430943  105708 main.go:141] libmachine: (ha-794405-m02)     <serial type='pty'>
	I0729 17:50:02.430954  105708 main.go:141] libmachine: (ha-794405-m02)       <target port='0'/>
	I0729 17:50:02.430963  105708 main.go:141] libmachine: (ha-794405-m02)     </serial>
	I0729 17:50:02.430975  105708 main.go:141] libmachine: (ha-794405-m02)     <console type='pty'>
	I0729 17:50:02.430986  105708 main.go:141] libmachine: (ha-794405-m02)       <target type='serial' port='0'/>
	I0729 17:50:02.430998  105708 main.go:141] libmachine: (ha-794405-m02)     </console>
	I0729 17:50:02.431009  105708 main.go:141] libmachine: (ha-794405-m02)     <rng model='virtio'>
	I0729 17:50:02.431021  105708 main.go:141] libmachine: (ha-794405-m02)       <backend model='random'>/dev/random</backend>
	I0729 17:50:02.431031  105708 main.go:141] libmachine: (ha-794405-m02)     </rng>
	I0729 17:50:02.431042  105708 main.go:141] libmachine: (ha-794405-m02)     
	I0729 17:50:02.431051  105708 main.go:141] libmachine: (ha-794405-m02)     
	I0729 17:50:02.431060  105708 main.go:141] libmachine: (ha-794405-m02)   </devices>
	I0729 17:50:02.431069  105708 main.go:141] libmachine: (ha-794405-m02) </domain>
	I0729 17:50:02.431082  105708 main.go:141] libmachine: (ha-794405-m02) 
	I0729 17:50:02.438140  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:8c:42:bd in network default
	I0729 17:50:02.438673  105708 main.go:141] libmachine: (ha-794405-m02) Ensuring networks are active...
	I0729 17:50:02.438694  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:02.439393  105708 main.go:141] libmachine: (ha-794405-m02) Ensuring network default is active
	I0729 17:50:02.439724  105708 main.go:141] libmachine: (ha-794405-m02) Ensuring network mk-ha-794405 is active
	I0729 17:50:02.440088  105708 main.go:141] libmachine: (ha-794405-m02) Getting domain xml...
	I0729 17:50:02.440815  105708 main.go:141] libmachine: (ha-794405-m02) Creating domain...
	I0729 17:50:02.797267  105708 main.go:141] libmachine: (ha-794405-m02) Waiting to get IP...
	I0729 17:50:02.798142  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:02.798581  105708 main.go:141] libmachine: (ha-794405-m02) DBG | unable to find current IP address of domain ha-794405-m02 in network mk-ha-794405
	I0729 17:50:02.798610  105708 main.go:141] libmachine: (ha-794405-m02) DBG | I0729 17:50:02.798550  106097 retry.go:31] will retry after 292.596043ms: waiting for machine to come up
	I0729 17:50:03.093110  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:03.093578  105708 main.go:141] libmachine: (ha-794405-m02) DBG | unable to find current IP address of domain ha-794405-m02 in network mk-ha-794405
	I0729 17:50:03.093626  105708 main.go:141] libmachine: (ha-794405-m02) DBG | I0729 17:50:03.093525  106097 retry.go:31] will retry after 249.181248ms: waiting for machine to come up
	I0729 17:50:03.343933  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:03.344384  105708 main.go:141] libmachine: (ha-794405-m02) DBG | unable to find current IP address of domain ha-794405-m02 in network mk-ha-794405
	I0729 17:50:03.344415  105708 main.go:141] libmachine: (ha-794405-m02) DBG | I0729 17:50:03.344334  106097 retry.go:31] will retry after 435.80599ms: waiting for machine to come up
	I0729 17:50:03.781921  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:03.782363  105708 main.go:141] libmachine: (ha-794405-m02) DBG | unable to find current IP address of domain ha-794405-m02 in network mk-ha-794405
	I0729 17:50:03.782390  105708 main.go:141] libmachine: (ha-794405-m02) DBG | I0729 17:50:03.782318  106097 retry.go:31] will retry after 521.033043ms: waiting for machine to come up
	I0729 17:50:04.305096  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:04.305521  105708 main.go:141] libmachine: (ha-794405-m02) DBG | unable to find current IP address of domain ha-794405-m02 in network mk-ha-794405
	I0729 17:50:04.305587  105708 main.go:141] libmachine: (ha-794405-m02) DBG | I0729 17:50:04.305510  106097 retry.go:31] will retry after 689.093873ms: waiting for machine to come up
	I0729 17:50:04.996280  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:04.996755  105708 main.go:141] libmachine: (ha-794405-m02) DBG | unable to find current IP address of domain ha-794405-m02 in network mk-ha-794405
	I0729 17:50:04.996780  105708 main.go:141] libmachine: (ha-794405-m02) DBG | I0729 17:50:04.996706  106097 retry.go:31] will retry after 952.96779ms: waiting for machine to come up
	I0729 17:50:05.950893  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:05.951247  105708 main.go:141] libmachine: (ha-794405-m02) DBG | unable to find current IP address of domain ha-794405-m02 in network mk-ha-794405
	I0729 17:50:05.951276  105708 main.go:141] libmachine: (ha-794405-m02) DBG | I0729 17:50:05.951214  106097 retry.go:31] will retry after 747.920675ms: waiting for machine to come up
	I0729 17:50:06.701350  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:06.701685  105708 main.go:141] libmachine: (ha-794405-m02) DBG | unable to find current IP address of domain ha-794405-m02 in network mk-ha-794405
	I0729 17:50:06.701716  105708 main.go:141] libmachine: (ha-794405-m02) DBG | I0729 17:50:06.701666  106097 retry.go:31] will retry after 1.243871709s: waiting for machine to come up
	I0729 17:50:07.946750  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:07.947219  105708 main.go:141] libmachine: (ha-794405-m02) DBG | unable to find current IP address of domain ha-794405-m02 in network mk-ha-794405
	I0729 17:50:07.947250  105708 main.go:141] libmachine: (ha-794405-m02) DBG | I0729 17:50:07.947160  106097 retry.go:31] will retry after 1.671917885s: waiting for machine to come up
	I0729 17:50:09.620903  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:09.621411  105708 main.go:141] libmachine: (ha-794405-m02) DBG | unable to find current IP address of domain ha-794405-m02 in network mk-ha-794405
	I0729 17:50:09.621444  105708 main.go:141] libmachine: (ha-794405-m02) DBG | I0729 17:50:09.621353  106097 retry.go:31] will retry after 2.136646754s: waiting for machine to come up
	I0729 17:50:11.760209  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:11.760703  105708 main.go:141] libmachine: (ha-794405-m02) DBG | unable to find current IP address of domain ha-794405-m02 in network mk-ha-794405
	I0729 17:50:11.760732  105708 main.go:141] libmachine: (ha-794405-m02) DBG | I0729 17:50:11.760630  106097 retry.go:31] will retry after 1.864944726s: waiting for machine to come up
	I0729 17:50:13.628039  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:13.628439  105708 main.go:141] libmachine: (ha-794405-m02) DBG | unable to find current IP address of domain ha-794405-m02 in network mk-ha-794405
	I0729 17:50:13.628461  105708 main.go:141] libmachine: (ha-794405-m02) DBG | I0729 17:50:13.628402  106097 retry.go:31] will retry after 3.226289483s: waiting for machine to come up
	I0729 17:50:16.858269  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:16.858719  105708 main.go:141] libmachine: (ha-794405-m02) DBG | unable to find current IP address of domain ha-794405-m02 in network mk-ha-794405
	I0729 17:50:16.858750  105708 main.go:141] libmachine: (ha-794405-m02) DBG | I0729 17:50:16.858653  106097 retry.go:31] will retry after 3.139463175s: waiting for machine to come up
	I0729 17:50:20.002174  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:20.002520  105708 main.go:141] libmachine: (ha-794405-m02) DBG | unable to find current IP address of domain ha-794405-m02 in network mk-ha-794405
	I0729 17:50:20.002552  105708 main.go:141] libmachine: (ha-794405-m02) DBG | I0729 17:50:20.002473  106097 retry.go:31] will retry after 3.930462308s: waiting for machine to come up
	I0729 17:50:23.934909  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:23.935367  105708 main.go:141] libmachine: (ha-794405-m02) Found IP for machine: 192.168.39.62
	I0729 17:50:23.935398  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has current primary IP address 192.168.39.62 and MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:23.935408  105708 main.go:141] libmachine: (ha-794405-m02) Reserving static IP address...
	I0729 17:50:23.935720  105708 main.go:141] libmachine: (ha-794405-m02) DBG | unable to find host DHCP lease matching {name: "ha-794405-m02", mac: "52:54:00:1a:4a:02", ip: "192.168.39.62"} in network mk-ha-794405
	I0729 17:50:24.008414  105708 main.go:141] libmachine: (ha-794405-m02) Reserved static IP address: 192.168.39.62
	I0729 17:50:24.008449  105708 main.go:141] libmachine: (ha-794405-m02) DBG | Getting to WaitForSSH function...
	I0729 17:50:24.008458  105708 main.go:141] libmachine: (ha-794405-m02) Waiting for SSH to be available...
	I0729 17:50:24.010923  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:24.011287  105708 main.go:141] libmachine: (ha-794405-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:4a:02", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:50:15 +0000 UTC Type:0 Mac:52:54:00:1a:4a:02 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:minikube Clientid:01:52:54:00:1a:4a:02}
	I0729 17:50:24.011316  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:24.011448  105708 main.go:141] libmachine: (ha-794405-m02) DBG | Using SSH client type: external
	I0729 17:50:24.011475  105708 main.go:141] libmachine: (ha-794405-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m02/id_rsa (-rw-------)
	I0729 17:50:24.011514  105708 main.go:141] libmachine: (ha-794405-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.62 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 17:50:24.011537  105708 main.go:141] libmachine: (ha-794405-m02) DBG | About to run SSH command:
	I0729 17:50:24.011554  105708 main.go:141] libmachine: (ha-794405-m02) DBG | exit 0
	I0729 17:50:24.136970  105708 main.go:141] libmachine: (ha-794405-m02) DBG | SSH cmd err, output: <nil>: 
	I0729 17:50:24.137257  105708 main.go:141] libmachine: (ha-794405-m02) KVM machine creation complete!
	I0729 17:50:24.137629  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetConfigRaw
	I0729 17:50:24.138203  105708 main.go:141] libmachine: (ha-794405-m02) Calling .DriverName
	I0729 17:50:24.138427  105708 main.go:141] libmachine: (ha-794405-m02) Calling .DriverName
	I0729 17:50:24.138606  105708 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 17:50:24.138620  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetState
	I0729 17:50:24.139891  105708 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 17:50:24.139913  105708 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 17:50:24.139930  105708 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 17:50:24.139937  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHHostname
	I0729 17:50:24.142295  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:24.142636  105708 main.go:141] libmachine: (ha-794405-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:4a:02", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:50:15 +0000 UTC Type:0 Mac:52:54:00:1a:4a:02 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-794405-m02 Clientid:01:52:54:00:1a:4a:02}
	I0729 17:50:24.142667  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:24.142783  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHPort
	I0729 17:50:24.142977  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHKeyPath
	I0729 17:50:24.143144  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHKeyPath
	I0729 17:50:24.143296  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHUsername
	I0729 17:50:24.143499  105708 main.go:141] libmachine: Using SSH client type: native
	I0729 17:50:24.143710  105708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I0729 17:50:24.143722  105708 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 17:50:24.248119  105708 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 17:50:24.248143  105708 main.go:141] libmachine: Detecting the provisioner...
	I0729 17:50:24.248152  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHHostname
	I0729 17:50:24.250902  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:24.251267  105708 main.go:141] libmachine: (ha-794405-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:4a:02", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:50:15 +0000 UTC Type:0 Mac:52:54:00:1a:4a:02 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-794405-m02 Clientid:01:52:54:00:1a:4a:02}
	I0729 17:50:24.251296  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:24.251429  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHPort
	I0729 17:50:24.251630  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHKeyPath
	I0729 17:50:24.251763  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHKeyPath
	I0729 17:50:24.251872  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHUsername
	I0729 17:50:24.252028  105708 main.go:141] libmachine: Using SSH client type: native
	I0729 17:50:24.252186  105708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I0729 17:50:24.252197  105708 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 17:50:24.353332  105708 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 17:50:24.353405  105708 main.go:141] libmachine: found compatible host: buildroot
	I0729 17:50:24.353419  105708 main.go:141] libmachine: Provisioning with buildroot...
	I0729 17:50:24.353430  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetMachineName
	I0729 17:50:24.353674  105708 buildroot.go:166] provisioning hostname "ha-794405-m02"
	I0729 17:50:24.353708  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetMachineName
	I0729 17:50:24.353880  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHHostname
	I0729 17:50:24.356482  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:24.356845  105708 main.go:141] libmachine: (ha-794405-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:4a:02", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:50:15 +0000 UTC Type:0 Mac:52:54:00:1a:4a:02 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-794405-m02 Clientid:01:52:54:00:1a:4a:02}
	I0729 17:50:24.356894  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:24.357069  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHPort
	I0729 17:50:24.357246  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHKeyPath
	I0729 17:50:24.357419  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHKeyPath
	I0729 17:50:24.357576  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHUsername
	I0729 17:50:24.357739  105708 main.go:141] libmachine: Using SSH client type: native
	I0729 17:50:24.357902  105708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I0729 17:50:24.357914  105708 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-794405-m02 && echo "ha-794405-m02" | sudo tee /etc/hostname
	I0729 17:50:24.475521  105708 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-794405-m02
	
	I0729 17:50:24.475553  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHHostname
	I0729 17:50:24.478081  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:24.478428  105708 main.go:141] libmachine: (ha-794405-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:4a:02", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:50:15 +0000 UTC Type:0 Mac:52:54:00:1a:4a:02 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-794405-m02 Clientid:01:52:54:00:1a:4a:02}
	I0729 17:50:24.478453  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:24.478623  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHPort
	I0729 17:50:24.478799  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHKeyPath
	I0729 17:50:24.478962  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHKeyPath
	I0729 17:50:24.479099  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHUsername
	I0729 17:50:24.479294  105708 main.go:141] libmachine: Using SSH client type: native
	I0729 17:50:24.479463  105708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I0729 17:50:24.479479  105708 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-794405-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-794405-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-794405-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 17:50:24.589289  105708 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 17:50:24.589319  105708 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19339-88081/.minikube CaCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19339-88081/.minikube}
	I0729 17:50:24.589338  105708 buildroot.go:174] setting up certificates
	I0729 17:50:24.589348  105708 provision.go:84] configureAuth start
	I0729 17:50:24.589359  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetMachineName
	I0729 17:50:24.589626  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetIP
	I0729 17:50:24.592085  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:24.592383  105708 main.go:141] libmachine: (ha-794405-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:4a:02", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:50:15 +0000 UTC Type:0 Mac:52:54:00:1a:4a:02 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-794405-m02 Clientid:01:52:54:00:1a:4a:02}
	I0729 17:50:24.592410  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:24.592488  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHHostname
	I0729 17:50:24.594455  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:24.594773  105708 main.go:141] libmachine: (ha-794405-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:4a:02", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:50:15 +0000 UTC Type:0 Mac:52:54:00:1a:4a:02 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-794405-m02 Clientid:01:52:54:00:1a:4a:02}
	I0729 17:50:24.594811  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:24.594914  105708 provision.go:143] copyHostCerts
	I0729 17:50:24.594962  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem
	I0729 17:50:24.594999  105708 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem, removing ...
	I0729 17:50:24.595011  105708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem
	I0729 17:50:24.595087  105708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem (1078 bytes)
	I0729 17:50:24.595174  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem
	I0729 17:50:24.595198  105708 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem, removing ...
	I0729 17:50:24.595207  105708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem
	I0729 17:50:24.595239  105708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem (1123 bytes)
	I0729 17:50:24.595301  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem
	I0729 17:50:24.595321  105708 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem, removing ...
	I0729 17:50:24.595330  105708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem
	I0729 17:50:24.595364  105708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem (1679 bytes)
	I0729 17:50:24.595429  105708 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem org=jenkins.ha-794405-m02 san=[127.0.0.1 192.168.39.62 ha-794405-m02 localhost minikube]
	I0729 17:50:24.689531  105708 provision.go:177] copyRemoteCerts
	I0729 17:50:24.689589  105708 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 17:50:24.689613  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHHostname
	I0729 17:50:24.691979  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:24.692254  105708 main.go:141] libmachine: (ha-794405-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:4a:02", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:50:15 +0000 UTC Type:0 Mac:52:54:00:1a:4a:02 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-794405-m02 Clientid:01:52:54:00:1a:4a:02}
	I0729 17:50:24.692282  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:24.692399  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHPort
	I0729 17:50:24.692567  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHKeyPath
	I0729 17:50:24.692703  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHUsername
	I0729 17:50:24.692821  105708 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m02/id_rsa Username:docker}
	I0729 17:50:24.775583  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 17:50:24.775674  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 17:50:24.800666  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 17:50:24.800749  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0729 17:50:24.824627  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 17:50:24.824693  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 17:50:24.851265  105708 provision.go:87] duration metric: took 261.904202ms to configureAuth
	I0729 17:50:24.851288  105708 buildroot.go:189] setting minikube options for container-runtime
	I0729 17:50:24.851485  105708 config.go:182] Loaded profile config "ha-794405": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:50:24.851574  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHHostname
	I0729 17:50:24.854353  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:24.854751  105708 main.go:141] libmachine: (ha-794405-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:4a:02", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:50:15 +0000 UTC Type:0 Mac:52:54:00:1a:4a:02 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-794405-m02 Clientid:01:52:54:00:1a:4a:02}
	I0729 17:50:24.854774  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:24.854972  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHPort
	I0729 17:50:24.855187  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHKeyPath
	I0729 17:50:24.855369  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHKeyPath
	I0729 17:50:24.855527  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHUsername
	I0729 17:50:24.855729  105708 main.go:141] libmachine: Using SSH client type: native
	I0729 17:50:24.855895  105708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I0729 17:50:24.855909  105708 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 17:50:25.115172  105708 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 17:50:25.115202  105708 main.go:141] libmachine: Checking connection to Docker...
	I0729 17:50:25.115212  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetURL
	I0729 17:50:25.116573  105708 main.go:141] libmachine: (ha-794405-m02) DBG | Using libvirt version 6000000
	I0729 17:50:25.118668  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:25.118991  105708 main.go:141] libmachine: (ha-794405-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:4a:02", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:50:15 +0000 UTC Type:0 Mac:52:54:00:1a:4a:02 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-794405-m02 Clientid:01:52:54:00:1a:4a:02}
	I0729 17:50:25.119024  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:25.119225  105708 main.go:141] libmachine: Docker is up and running!
	I0729 17:50:25.119244  105708 main.go:141] libmachine: Reticulating splines...
	I0729 17:50:25.119252  105708 client.go:171] duration metric: took 23.173687306s to LocalClient.Create
	I0729 17:50:25.119275  105708 start.go:167] duration metric: took 23.173752916s to libmachine.API.Create "ha-794405"
	I0729 17:50:25.119285  105708 start.go:293] postStartSetup for "ha-794405-m02" (driver="kvm2")
	I0729 17:50:25.119295  105708 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 17:50:25.119310  105708 main.go:141] libmachine: (ha-794405-m02) Calling .DriverName
	I0729 17:50:25.119560  105708 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 17:50:25.119584  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHHostname
	I0729 17:50:25.121881  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:25.122217  105708 main.go:141] libmachine: (ha-794405-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:4a:02", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:50:15 +0000 UTC Type:0 Mac:52:54:00:1a:4a:02 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-794405-m02 Clientid:01:52:54:00:1a:4a:02}
	I0729 17:50:25.122249  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:25.122363  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHPort
	I0729 17:50:25.122553  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHKeyPath
	I0729 17:50:25.122712  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHUsername
	I0729 17:50:25.122844  105708 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m02/id_rsa Username:docker}
	I0729 17:50:25.202815  105708 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 17:50:25.207271  105708 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 17:50:25.207291  105708 filesync.go:126] Scanning /home/jenkins/minikube-integration/19339-88081/.minikube/addons for local assets ...
	I0729 17:50:25.207351  105708 filesync.go:126] Scanning /home/jenkins/minikube-integration/19339-88081/.minikube/files for local assets ...
	I0729 17:50:25.207424  105708 filesync.go:149] local asset: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem -> 952822.pem in /etc/ssl/certs
	I0729 17:50:25.207435  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem -> /etc/ssl/certs/952822.pem
	I0729 17:50:25.207509  105708 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 17:50:25.216379  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem --> /etc/ssl/certs/952822.pem (1708 bytes)
	I0729 17:50:25.239471  105708 start.go:296] duration metric: took 120.173209ms for postStartSetup
	I0729 17:50:25.239520  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetConfigRaw
	I0729 17:50:25.240058  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetIP
	I0729 17:50:25.242548  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:25.243044  105708 main.go:141] libmachine: (ha-794405-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:4a:02", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:50:15 +0000 UTC Type:0 Mac:52:54:00:1a:4a:02 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-794405-m02 Clientid:01:52:54:00:1a:4a:02}
	I0729 17:50:25.243075  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:25.243323  105708 profile.go:143] Saving config to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/config.json ...
	I0729 17:50:25.243501  105708 start.go:128] duration metric: took 23.31679432s to createHost
	I0729 17:50:25.243523  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHHostname
	I0729 17:50:25.245677  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:25.245977  105708 main.go:141] libmachine: (ha-794405-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:4a:02", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:50:15 +0000 UTC Type:0 Mac:52:54:00:1a:4a:02 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-794405-m02 Clientid:01:52:54:00:1a:4a:02}
	I0729 17:50:25.246005  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:25.246151  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHPort
	I0729 17:50:25.246321  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHKeyPath
	I0729 17:50:25.246430  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHKeyPath
	I0729 17:50:25.246510  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHUsername
	I0729 17:50:25.246631  105708 main.go:141] libmachine: Using SSH client type: native
	I0729 17:50:25.246875  105708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I0729 17:50:25.246889  105708 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 17:50:25.349435  105708 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722275425.326707607
	
	I0729 17:50:25.349459  105708 fix.go:216] guest clock: 1722275425.326707607
	I0729 17:50:25.349468  105708 fix.go:229] Guest: 2024-07-29 17:50:25.326707607 +0000 UTC Remote: 2024-07-29 17:50:25.243512506 +0000 UTC m=+82.453073606 (delta=83.195101ms)
	I0729 17:50:25.349492  105708 fix.go:200] guest clock delta is within tolerance: 83.195101ms
	I0729 17:50:25.349499  105708 start.go:83] releasing machines lock for "ha-794405-m02", held for 23.422883421s
	I0729 17:50:25.349518  105708 main.go:141] libmachine: (ha-794405-m02) Calling .DriverName
	I0729 17:50:25.349804  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetIP
	I0729 17:50:25.352168  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:25.352505  105708 main.go:141] libmachine: (ha-794405-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:4a:02", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:50:15 +0000 UTC Type:0 Mac:52:54:00:1a:4a:02 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-794405-m02 Clientid:01:52:54:00:1a:4a:02}
	I0729 17:50:25.352539  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:25.354836  105708 out.go:177] * Found network options:
	I0729 17:50:25.356053  105708 out.go:177]   - NO_PROXY=192.168.39.102
	W0729 17:50:25.357226  105708 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 17:50:25.357252  105708 main.go:141] libmachine: (ha-794405-m02) Calling .DriverName
	I0729 17:50:25.357733  105708 main.go:141] libmachine: (ha-794405-m02) Calling .DriverName
	I0729 17:50:25.357902  105708 main.go:141] libmachine: (ha-794405-m02) Calling .DriverName
	I0729 17:50:25.357962  105708 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 17:50:25.358006  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHHostname
	W0729 17:50:25.358096  105708 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 17:50:25.358156  105708 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 17:50:25.358171  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHHostname
	I0729 17:50:25.360594  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:25.360887  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:25.360935  105708 main.go:141] libmachine: (ha-794405-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:4a:02", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:50:15 +0000 UTC Type:0 Mac:52:54:00:1a:4a:02 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-794405-m02 Clientid:01:52:54:00:1a:4a:02}
	I0729 17:50:25.360956  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:25.361069  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHPort
	I0729 17:50:25.361218  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHKeyPath
	I0729 17:50:25.361285  105708 main.go:141] libmachine: (ha-794405-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:4a:02", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:50:15 +0000 UTC Type:0 Mac:52:54:00:1a:4a:02 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-794405-m02 Clientid:01:52:54:00:1a:4a:02}
	I0729 17:50:25.361314  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:25.361374  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHUsername
	I0729 17:50:25.361481  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHPort
	I0729 17:50:25.361551  105708 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m02/id_rsa Username:docker}
	I0729 17:50:25.361623  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHKeyPath
	I0729 17:50:25.361793  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHUsername
	I0729 17:50:25.361944  105708 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m02/id_rsa Username:docker}
	I0729 17:50:25.592375  105708 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 17:50:25.598665  105708 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 17:50:25.598719  105708 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 17:50:25.615605  105708 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 17:50:25.615623  105708 start.go:495] detecting cgroup driver to use...
	I0729 17:50:25.615677  105708 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 17:50:25.632375  105708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 17:50:25.645620  105708 docker.go:217] disabling cri-docker service (if available) ...
	I0729 17:50:25.645660  105708 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 17:50:25.659561  105708 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 17:50:25.675559  105708 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 17:50:25.786519  105708 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 17:50:25.949904  105708 docker.go:233] disabling docker service ...
	I0729 17:50:25.949987  105708 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 17:50:25.964662  105708 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 17:50:25.977981  105708 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 17:50:26.112688  105708 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 17:50:26.246776  105708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 17:50:26.261490  105708 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 17:50:26.280323  105708 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 17:50:26.280405  105708 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:50:26.291243  105708 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 17:50:26.291317  105708 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:50:26.301961  105708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:50:26.312821  105708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:50:26.324499  105708 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 17:50:26.336637  105708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:50:26.348224  105708 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:50:26.365485  105708 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
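
The sed edits above pin the pause image and switch CRI-O's cgroup manager in /etc/crio/crio.conf.d/02-crio.conf. A rough Go equivalent of the two main substitutions, run directly on the node instead of through ssh_runner (the regexes are adapted from the sed expressions in the log, not taken from minikube's source):

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf" // path from the log; run on the node itself
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	// Mirror the sed edits: replace the pause_image and cgroup_manager lines wholesale.
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	out := pause.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	out = cgroup.ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(conf, out, 0o644); err != nil {
		panic(err)
	}
	fmt.Println("updated", conf)
}
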
	I0729 17:50:26.375878  105708 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 17:50:26.385363  105708 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 17:50:26.385418  105708 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 17:50:26.400237  105708 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
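
The sysctl probe above fails because br_netfilter is not loaded yet, so the flow falls back to modprobe and then enables IPv4 forwarding. A small sketch of that fallback, assumed to run as root on the node with modprobe on PATH:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const knob = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(knob); err != nil {
		// The knob only appears once br_netfilter is loaded, matching the error in the log above.
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			panic(fmt.Sprintf("modprobe br_netfilter: %v: %s", err, out))
		}
	}
	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		panic(err)
	}
	fmt.Println("br_netfilter loaded and ip_forward enabled")
}
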
	I0729 17:50:26.410288  105708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 17:50:26.531664  105708 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 17:50:26.667501  105708 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 17:50:26.667594  105708 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 17:50:26.672733  105708 start.go:563] Will wait 60s for crictl version
	I0729 17:50:26.672799  105708 ssh_runner.go:195] Run: which crictl
	I0729 17:50:26.676328  105708 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 17:50:26.718978  105708 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 17:50:26.719077  105708 ssh_runner.go:195] Run: crio --version
	I0729 17:50:26.747155  105708 ssh_runner.go:195] Run: crio --version
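
After restarting CRI-O the test waits up to 60s for /var/run/crio/crio.sock and then asks crictl for the runtime version (cri-o 1.29.1 above). A sketch of that wait-and-probe loop; the 500ms poll interval is illustrative, not minikube's exact value:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	const sock = "/var/run/crio/crio.sock"
	deadline := time.Now().Add(60 * time.Second)
	for {
		if _, err := os.Stat(sock); err == nil {
			break
		}
		if time.Now().After(deadline) {
			panic("timed out waiting for " + sock)
		}
		time.Sleep(500 * time.Millisecond)
	}
	out, err := exec.Command("sudo", "/usr/bin/crictl", "version").CombinedOutput()
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out)) // prints RuntimeName/RuntimeVersion as seen in the log
}
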
	I0729 17:50:26.777360  105708 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 17:50:26.778668  105708 out.go:177]   - env NO_PROXY=192.168.39.102
	I0729 17:50:26.779784  105708 main.go:141] libmachine: (ha-794405-m02) Calling .GetIP
	I0729 17:50:26.782353  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:26.782734  105708 main.go:141] libmachine: (ha-794405-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:4a:02", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:50:15 +0000 UTC Type:0 Mac:52:54:00:1a:4a:02 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-794405-m02 Clientid:01:52:54:00:1a:4a:02}
	I0729 17:50:26.782769  105708 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 17:50:26.782943  105708 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 17:50:26.786976  105708 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
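
The bash one-liner above drops any stale host.minikube.internal entry from /etc/hosts and appends the gateway mapping. The same filter-and-append logic in Go, assuming permission to rewrite /etc/hosts directly (the real command stages a temp file and copies it in with sudo):

package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.39.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any previous host.minikube.internal mapping, mirroring the grep -v.
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}
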
	I0729 17:50:26.799733  105708 mustload.go:65] Loading cluster: ha-794405
	I0729 17:50:26.799968  105708 config.go:182] Loaded profile config "ha-794405": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:50:26.800252  105708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:50:26.800281  105708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:50:26.814821  105708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33337
	I0729 17:50:26.815291  105708 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:50:26.815811  105708 main.go:141] libmachine: Using API Version  1
	I0729 17:50:26.815836  105708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:50:26.816141  105708 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:50:26.816326  105708 main.go:141] libmachine: (ha-794405) Calling .GetState
	I0729 17:50:26.817950  105708 host.go:66] Checking if "ha-794405" exists ...
	I0729 17:50:26.818339  105708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:50:26.818389  105708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:50:26.833845  105708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34695
	I0729 17:50:26.834398  105708 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:50:26.835057  105708 main.go:141] libmachine: Using API Version  1
	I0729 17:50:26.835083  105708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:50:26.835437  105708 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:50:26.835641  105708 main.go:141] libmachine: (ha-794405) Calling .DriverName
	I0729 17:50:26.835796  105708 certs.go:68] Setting up /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405 for IP: 192.168.39.62
	I0729 17:50:26.835819  105708 certs.go:194] generating shared ca certs ...
	I0729 17:50:26.835834  105708 certs.go:226] acquiring lock for ca certs: {Name:mk20106d58b3f22ea0fc0f4b499fa3fa572a2690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:50:26.835958  105708 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key
	I0729 17:50:26.835996  105708 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key
	I0729 17:50:26.836006  105708 certs.go:256] generating profile certs ...
	I0729 17:50:26.836077  105708 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/client.key
	I0729 17:50:26.836100  105708 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.key.838ee660
	I0729 17:50:26.836114  105708 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.crt.838ee660 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.102 192.168.39.62 192.168.39.254]
	I0729 17:50:26.888048  105708 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.crt.838ee660 ...
	I0729 17:50:26.888075  105708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.crt.838ee660: {Name:mkfc61a8a666685e5f20b7ed9465d09419315008 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:50:26.888258  105708 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.key.838ee660 ...
	I0729 17:50:26.888276  105708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.key.838ee660: {Name:mke59070840099e39d97d4ecf9944713af9aa4f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:50:26.888368  105708 certs.go:381] copying /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.crt.838ee660 -> /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.crt
	I0729 17:50:26.888534  105708 certs.go:385] copying /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.key.838ee660 -> /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.key
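
The apiserver certificate generated here has to cover the service IP, loopback, both control-plane node IPs and the HA VIP 192.168.39.254, which is exactly the list in the "Generating cert ... with IP's" line. A condensed crypto/x509 sketch of issuing such a certificate; for self-containment it creates a throwaway CA instead of reusing minikube's cached ca.key:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for minikubeCA; the real run reuses the cached ca.key.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// API server serving cert with the SANs listed in the log.
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.102"), net.ParseIP("192.168.39.62"), net.ParseIP("192.168.39.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	out, _ := os.Create("apiserver.crt")
	defer out.Close()
	pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
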
	I0729 17:50:26.888706  105708 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/proxy-client.key
	I0729 17:50:26.888727  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 17:50:26.888745  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 17:50:26.888764  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 17:50:26.888783  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 17:50:26.888800  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 17:50:26.888820  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 17:50:26.888838  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 17:50:26.888876  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 17:50:26.888986  105708 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282.pem (1338 bytes)
	W0729 17:50:26.889035  105708 certs.go:480] ignoring /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282_empty.pem, impossibly tiny 0 bytes
	I0729 17:50:26.889048  105708 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 17:50:26.889080  105708 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem (1078 bytes)
	I0729 17:50:26.889112  105708 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem (1123 bytes)
	I0729 17:50:26.889143  105708 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem (1679 bytes)
	I0729 17:50:26.889217  105708 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem (1708 bytes)
	I0729 17:50:26.889271  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:50:26.889291  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282.pem -> /usr/share/ca-certificates/95282.pem
	I0729 17:50:26.889308  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem -> /usr/share/ca-certificates/952822.pem
	I0729 17:50:26.889348  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHHostname
	I0729 17:50:26.892493  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:50:26.892951  105708 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:50:26.892980  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:50:26.893125  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHPort
	I0729 17:50:26.893315  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:50:26.893443  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHUsername
	I0729 17:50:26.893573  105708 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405/id_rsa Username:docker}
	I0729 17:50:26.965153  105708 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0729 17:50:26.969848  105708 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0729 17:50:26.982564  105708 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0729 17:50:26.986579  105708 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0729 17:50:26.996948  105708 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0729 17:50:27.001030  105708 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0729 17:50:27.010925  105708 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0729 17:50:27.014937  105708 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0729 17:50:27.029179  105708 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0729 17:50:27.034973  105708 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0729 17:50:27.046033  105708 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0729 17:50:27.050116  105708 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0729 17:50:27.060140  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 17:50:27.086899  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 17:50:27.111873  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 17:50:27.136912  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 17:50:27.161297  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0729 17:50:27.183852  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 17:50:27.205820  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 17:50:27.227387  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 17:50:27.252648  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 17:50:27.276299  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282.pem --> /usr/share/ca-certificates/95282.pem (1338 bytes)
	I0729 17:50:27.298286  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem --> /usr/share/ca-certificates/952822.pem (1708 bytes)
	I0729 17:50:27.320740  105708 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0729 17:50:27.336410  105708 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0729 17:50:27.351811  105708 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0729 17:50:27.367302  105708 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0729 17:50:27.382567  105708 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0729 17:50:27.398845  105708 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0729 17:50:27.414879  105708 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0729 17:50:27.431677  105708 ssh_runner.go:195] Run: openssl version
	I0729 17:50:27.437000  105708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 17:50:27.447184  105708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:50:27.451433  105708 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:50:27.451488  105708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:50:27.457153  105708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 17:50:27.467476  105708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/95282.pem && ln -fs /usr/share/ca-certificates/95282.pem /etc/ssl/certs/95282.pem"
	I0729 17:50:27.477555  105708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/95282.pem
	I0729 17:50:27.481669  105708 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:45 /usr/share/ca-certificates/95282.pem
	I0729 17:50:27.481714  105708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/95282.pem
	I0729 17:50:27.487385  105708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/95282.pem /etc/ssl/certs/51391683.0"
	I0729 17:50:27.498987  105708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/952822.pem && ln -fs /usr/share/ca-certificates/952822.pem /etc/ssl/certs/952822.pem"
	I0729 17:50:27.510739  105708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/952822.pem
	I0729 17:50:27.515250  105708 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:45 /usr/share/ca-certificates/952822.pem
	I0729 17:50:27.515318  105708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/952822.pem
	I0729 17:50:27.520988  105708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/952822.pem /etc/ssl/certs/3ec20f2e.0"
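
Each CA bundle copied to /usr/share/ca-certificates is hashed with openssl x509 -hash and symlinked under /etc/ssl/certs by that hash (b5213941.0, 51391683.0, 3ec20f2e.0 above) so OpenSSL's directory lookup can find it. A sketch of that hash-and-link step, shelling out to openssl as the test does and linking straight from the /usr/share path for brevity:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkByHash(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Recreate the symlink idempotently, like the `test -L || ln -fs` guard in the log.
	_ = os.Remove(link)
	return os.Symlink(pemPath, link)
}

func main() {
	for _, p := range []string{
		"/usr/share/ca-certificates/minikubeCA.pem",
		"/usr/share/ca-certificates/95282.pem",
		"/usr/share/ca-certificates/952822.pem",
	} {
		if err := linkByHash(p); err != nil {
			panic(err)
		}
		fmt.Println("linked", p)
	}
}
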
	I0729 17:50:27.532716  105708 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 17:50:27.536746  105708 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 17:50:27.536801  105708 kubeadm.go:934] updating node {m02 192.168.39.62 8443 v1.30.3 crio true true} ...
	I0729 17:50:27.536934  105708 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-794405-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.62
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-794405 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 17:50:27.536968  105708 kube-vip.go:115] generating kube-vip config ...
	I0729 17:50:27.537001  105708 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 17:50:27.554348  105708 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 17:50:27.554410  105708 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0729 17:50:27.554455  105708 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 17:50:27.565114  105708 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0729 17:50:27.565165  105708 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0729 17:50:27.575660  105708 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0729 17:50:27.575672  105708 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19339-88081/.minikube/cache/linux/amd64/v1.30.3/kubelet
	I0729 17:50:27.575686  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 17:50:27.575694  105708 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19339-88081/.minikube/cache/linux/amd64/v1.30.3/kubeadm
	I0729 17:50:27.575760  105708 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 17:50:27.580013  105708 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0729 17:50:27.580046  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0729 17:50:28.382415  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 17:50:28.382508  105708 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 17:50:28.388642  105708 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0729 17:50:28.388679  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0729 17:50:28.499340  105708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:50:28.536388  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 17:50:28.536512  105708 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 17:50:28.550613  105708 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0729 17:50:28.550655  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
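
The kubectl, kubeadm and kubelet binaries are downloaded from dl.k8s.io with a checksum hint and copied into /var/lib/minikube/binaries only after the stat check shows them missing. A minimal download-and-verify sketch for one of them against its published .sha256 file (error handling trimmed; HTTP status codes are not checked):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func fetch(url string) []byte {
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	return body
}

func main() {
	const base = "https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl"
	bin := fetch(base)
	want := strings.Fields(string(fetch(base + ".sha256")))[0]
	sum := sha256.Sum256(bin)
	got := hex.EncodeToString(sum[:])
	if got != want {
		panic(fmt.Sprintf("checksum mismatch: got %s want %s", got, want))
	}
	if err := os.WriteFile("kubectl", bin, 0o755); err != nil {
		panic(err)
	}
	fmt.Println("kubectl verified:", got)
}
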
	I0729 17:50:29.021828  105708 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0729 17:50:29.031476  105708 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0729 17:50:29.048427  105708 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 17:50:29.063977  105708 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0729 17:50:29.080284  105708 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 17:50:29.084172  105708 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 17:50:29.095268  105708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 17:50:29.215227  105708 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 17:50:29.232465  105708 host.go:66] Checking if "ha-794405" exists ...
	I0729 17:50:29.233009  105708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:50:29.233069  105708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:50:29.248039  105708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45755
	I0729 17:50:29.248592  105708 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:50:29.249043  105708 main.go:141] libmachine: Using API Version  1
	I0729 17:50:29.249066  105708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:50:29.249395  105708 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:50:29.249590  105708 main.go:141] libmachine: (ha-794405) Calling .DriverName
	I0729 17:50:29.249744  105708 start.go:317] joinCluster: &{Name:ha-794405 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cluster
Name:ha-794405 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.62 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:50:29.249846  105708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0729 17:50:29.249870  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHHostname
	I0729 17:50:29.252743  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:50:29.253163  105708 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:50:29.253193  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:50:29.253322  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHPort
	I0729 17:50:29.253494  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:50:29.253657  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHUsername
	I0729 17:50:29.253799  105708 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405/id_rsa Username:docker}
	I0729 17:50:29.421845  105708 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.62 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 17:50:29.421894  105708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ur0h6k.06ti7dkwwdnzm66h --discovery-token-ca-cert-hash sha256:13f0bc092dc09b7291dec830fc09c94db5ac1707dd26df6091df80009af3af7f --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-794405-m02 --control-plane --apiserver-advertise-address=192.168.39.62 --apiserver-bind-port=8443"
	I0729 17:50:52.245826  105708 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ur0h6k.06ti7dkwwdnzm66h --discovery-token-ca-cert-hash sha256:13f0bc092dc09b7291dec830fc09c94db5ac1707dd26df6091df80009af3af7f --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-794405-m02 --control-plane --apiserver-advertise-address=192.168.39.62 --apiserver-bind-port=8443": (22.823897398s)
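
Joining m02 as a second control plane amounts to running the printed kubeadm join command on the new machine with the token and CA hash the primary just minted; it took roughly 23s here. A sketch of invoking it with os/exec, reusing the one-shot token and hash from this log (long since expired):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	args := []string{
		"join", "control-plane.minikube.internal:8443",
		"--token", "ur0h6k.06ti7dkwwdnzm66h", // one-shot token from the log; real runs mint a fresh one
		"--discovery-token-ca-cert-hash", "sha256:13f0bc092dc09b7291dec830fc09c94db5ac1707dd26df6091df80009af3af7f",
		"--ignore-preflight-errors=all",
		"--cri-socket", "unix:///var/run/crio/crio.sock",
		"--node-name=ha-794405-m02",
		"--control-plane",
		"--apiserver-advertise-address=192.168.39.62",
		"--apiserver-bind-port=8443",
	}
	cmd := exec.Command("/var/lib/minikube/binaries/v1.30.3/kubeadm", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "kubeadm join failed:", err)
		os.Exit(1)
	}
}
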
	I0729 17:50:52.245871  105708 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0729 17:50:52.752541  105708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-794405-m02 minikube.k8s.io/updated_at=2024_07_29T17_50_52_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=de53afae5e8f4269438099f4dad14d93a8a17e35 minikube.k8s.io/name=ha-794405 minikube.k8s.io/primary=false
	I0729 17:50:52.913172  105708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-794405-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0729 17:50:53.028489  105708 start.go:319] duration metric: took 23.778741939s to joinCluster
	I0729 17:50:53.028577  105708 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.62 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 17:50:53.028882  105708 config.go:182] Loaded profile config "ha-794405": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:50:53.032153  105708 out.go:177] * Verifying Kubernetes components...
	I0729 17:50:53.033313  105708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 17:50:53.303312  105708 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 17:50:53.357367  105708 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19339-88081/kubeconfig
	I0729 17:50:53.357719  105708 kapi.go:59] client config for ha-794405: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/client.crt", KeyFile:"/home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/client.key", CAFile:"/home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0729 17:50:53.357803  105708 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.102:8443
	I0729 17:50:53.358125  105708 node_ready.go:35] waiting up to 6m0s for node "ha-794405-m02" to be "Ready" ...
	I0729 17:50:53.358245  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:50:53.358256  105708 round_trippers.go:469] Request Headers:
	I0729 17:50:53.358267  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:50:53.358276  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:50:53.372063  105708 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0729 17:50:53.859331  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:50:53.859352  105708 round_trippers.go:469] Request Headers:
	I0729 17:50:53.859360  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:50:53.859365  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:50:53.867596  105708 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0729 17:50:54.358917  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:50:54.358941  105708 round_trippers.go:469] Request Headers:
	I0729 17:50:54.358950  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:50:54.358952  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:50:54.363593  105708 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 17:50:54.859319  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:50:54.859341  105708 round_trippers.go:469] Request Headers:
	I0729 17:50:54.859348  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:50:54.859352  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:50:54.864698  105708 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 17:50:55.359044  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:50:55.359071  105708 round_trippers.go:469] Request Headers:
	I0729 17:50:55.359084  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:50:55.359090  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:50:55.364653  105708 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 17:50:55.365354  105708 node_ready.go:53] node "ha-794405-m02" has status "Ready":"False"
	I0729 17:50:55.859282  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:50:55.859302  105708 round_trippers.go:469] Request Headers:
	I0729 17:50:55.859311  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:50:55.859315  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:50:55.863618  105708 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 17:50:56.358964  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:50:56.358994  105708 round_trippers.go:469] Request Headers:
	I0729 17:50:56.359005  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:50:56.359012  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:50:56.362213  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:50:56.858760  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:50:56.858779  105708 round_trippers.go:469] Request Headers:
	I0729 17:50:56.858787  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:50:56.858791  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:50:56.861362  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:50:57.358925  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:50:57.358946  105708 round_trippers.go:469] Request Headers:
	I0729 17:50:57.358955  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:50:57.358959  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:50:57.361698  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:50:57.858500  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:50:57.858522  105708 round_trippers.go:469] Request Headers:
	I0729 17:50:57.858530  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:50:57.858538  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:50:57.863169  105708 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 17:50:57.863967  105708 node_ready.go:53] node "ha-794405-m02" has status "Ready":"False"
	I0729 17:50:58.358645  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:50:58.358667  105708 round_trippers.go:469] Request Headers:
	I0729 17:50:58.358675  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:50:58.358679  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:50:58.361278  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:50:58.859319  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:50:58.859341  105708 round_trippers.go:469] Request Headers:
	I0729 17:50:58.859349  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:50:58.859354  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:50:58.862367  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:50:59.358923  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:50:59.358952  105708 round_trippers.go:469] Request Headers:
	I0729 17:50:59.358964  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:50:59.358970  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:50:59.368443  105708 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0729 17:50:59.858445  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:50:59.858475  105708 round_trippers.go:469] Request Headers:
	I0729 17:50:59.858487  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:50:59.858493  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:50:59.861504  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:51:00.358665  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:51:00.358687  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:00.358695  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:00.358698  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:00.361342  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:51:00.361796  105708 node_ready.go:53] node "ha-794405-m02" has status "Ready":"False"
	I0729 17:51:00.859287  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:51:00.859310  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:00.859317  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:00.859320  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:00.862958  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:51:01.358364  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:51:01.358386  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:01.358394  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:01.358399  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:01.361432  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:51:01.859047  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:51:01.859074  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:01.859086  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:01.859094  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:01.862035  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:51:02.358546  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:51:02.358569  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:02.358577  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:02.358581  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:02.361609  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:51:02.362133  105708 node_ready.go:53] node "ha-794405-m02" has status "Ready":"False"
	I0729 17:51:02.858906  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:51:02.858927  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:02.858940  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:02.858944  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:02.862407  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:51:03.359138  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:51:03.359164  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:03.359173  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:03.359178  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:03.361986  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:51:03.859104  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:51:03.859127  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:03.859136  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:03.859139  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:03.862426  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:51:04.359063  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:51:04.359087  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:04.359095  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:04.359099  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:04.362334  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:51:04.362999  105708 node_ready.go:53] node "ha-794405-m02" has status "Ready":"False"
	I0729 17:51:04.859366  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:51:04.859388  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:04.859397  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:04.859400  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:04.862279  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:51:05.358324  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:51:05.358345  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:05.358353  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:05.358358  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:05.361284  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:51:05.859219  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:51:05.859242  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:05.859250  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:05.859254  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:05.861973  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:51:06.359344  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:51:06.359367  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:06.359375  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:06.359378  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:06.362493  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:51:06.363234  105708 node_ready.go:53] node "ha-794405-m02" has status "Ready":"False"
	I0729 17:51:06.858498  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:51:06.858521  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:06.858530  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:06.858534  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:06.861598  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:51:07.358370  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:51:07.358396  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:07.358409  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:07.358414  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:07.361864  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:51:07.362602  105708 node_ready.go:49] node "ha-794405-m02" has status "Ready":"True"
	I0729 17:51:07.362623  105708 node_ready.go:38] duration metric: took 14.004476488s for node "ha-794405-m02" to be "Ready" ...
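The block above is the test's node_ready loop: it GETs /api/v1/nodes/ha-794405-m02 roughly every 500ms until the node's Ready condition reports True. A minimal client-go sketch of that polling pattern (the kubeconfig path below is a placeholder, not taken from this run):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the API server until the named node reports Ready=True
// or the timeout expires, mirroring the node_ready check in the log above.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // the log above polls roughly every 500ms
	}
	return fmt.Errorf("node %q not Ready within %s", name, timeout)
}

func main() {
	// Placeholder kubeconfig path; the real test uses the profile's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(context.Background(), cs, "ha-794405-m02", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node Ready")
}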
	I0729 17:51:07.362631  105708 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 17:51:07.362705  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I0729 17:51:07.362718  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:07.362728  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:07.362737  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:07.368261  105708 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 17:51:07.375052  105708 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-bb2jg" in "kube-system" namespace to be "Ready" ...
	I0729 17:51:07.375139  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-bb2jg
	I0729 17:51:07.375151  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:07.375162  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:07.375168  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:07.378064  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:51:07.378817  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405
	I0729 17:51:07.378839  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:07.378849  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:07.378858  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:07.381407  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:51:07.383608  105708 pod_ready.go:92] pod "coredns-7db6d8ff4d-bb2jg" in "kube-system" namespace has status "Ready":"True"
	I0729 17:51:07.383625  105708 pod_ready.go:81] duration metric: took 8.550201ms for pod "coredns-7db6d8ff4d-bb2jg" in "kube-system" namespace to be "Ready" ...
	I0729 17:51:07.383634  105708 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-nzvff" in "kube-system" namespace to be "Ready" ...
	I0729 17:51:07.383683  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nzvff
	I0729 17:51:07.383690  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:07.383696  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:07.383704  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:07.388595  105708 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 17:51:07.389483  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405
	I0729 17:51:07.389498  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:07.389507  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:07.389511  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:07.391907  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:51:07.392441  105708 pod_ready.go:92] pod "coredns-7db6d8ff4d-nzvff" in "kube-system" namespace has status "Ready":"True"
	I0729 17:51:07.392456  105708 pod_ready.go:81] duration metric: took 8.810378ms for pod "coredns-7db6d8ff4d-nzvff" in "kube-system" namespace to be "Ready" ...
	I0729 17:51:07.392466  105708 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-794405" in "kube-system" namespace to be "Ready" ...
	I0729 17:51:07.392507  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-794405
	I0729 17:51:07.392515  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:07.392521  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:07.392525  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:07.394731  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:51:07.395273  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405
	I0729 17:51:07.395295  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:07.395308  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:07.395314  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:07.397197  105708 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0729 17:51:07.397787  105708 pod_ready.go:92] pod "etcd-ha-794405" in "kube-system" namespace has status "Ready":"True"
	I0729 17:51:07.397814  105708 pod_ready.go:81] duration metric: took 5.34175ms for pod "etcd-ha-794405" in "kube-system" namespace to be "Ready" ...
	I0729 17:51:07.397826  105708 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-794405-m02" in "kube-system" namespace to be "Ready" ...
	I0729 17:51:07.397886  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-794405-m02
	I0729 17:51:07.397896  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:07.397905  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:07.397913  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:07.400537  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:51:07.401088  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:51:07.401101  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:07.401108  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:07.401115  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:07.402916  105708 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0729 17:51:07.899036  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-794405-m02
	I0729 17:51:07.899066  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:07.899078  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:07.899084  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:07.909049  105708 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0729 17:51:07.909963  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:51:07.909980  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:07.909988  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:07.909992  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:07.912758  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:51:08.398045  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-794405-m02
	I0729 17:51:08.398072  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:08.398079  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:08.398085  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:08.401591  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:51:08.402289  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:51:08.402305  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:08.402312  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:08.402316  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:08.405255  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:51:08.898105  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-794405-m02
	I0729 17:51:08.898124  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:08.898133  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:08.898137  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:08.901450  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:51:08.902305  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:51:08.902322  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:08.902332  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:08.902338  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:08.904769  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:51:09.398045  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-794405-m02
	I0729 17:51:09.398067  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:09.398075  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:09.398080  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:09.401424  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:51:09.402429  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:51:09.402446  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:09.402452  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:09.402456  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:09.405058  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:51:09.405629  105708 pod_ready.go:102] pod "etcd-ha-794405-m02" in "kube-system" namespace has status "Ready":"False"
	I0729 17:51:09.898067  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-794405-m02
	I0729 17:51:09.898091  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:09.898099  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:09.898103  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:09.901384  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:51:09.902045  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:51:09.902061  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:09.902068  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:09.902073  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:09.904490  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:51:10.398417  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-794405-m02
	I0729 17:51:10.398442  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:10.398450  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:10.398455  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:10.401415  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:51:10.401997  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:51:10.402012  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:10.402019  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:10.402022  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:10.404835  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:51:10.405333  105708 pod_ready.go:92] pod "etcd-ha-794405-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 17:51:10.405350  105708 pod_ready.go:81] duration metric: took 3.007512748s for pod "etcd-ha-794405-m02" in "kube-system" namespace to be "Ready" ...
	I0729 17:51:10.405367  105708 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-794405" in "kube-system" namespace to be "Ready" ...
	I0729 17:51:10.405423  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-794405
	I0729 17:51:10.405432  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:10.405442  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:10.405447  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:10.407827  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:51:10.408297  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405
	I0729 17:51:10.408311  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:10.408320  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:10.408325  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:10.410380  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:51:10.410835  105708 pod_ready.go:92] pod "kube-apiserver-ha-794405" in "kube-system" namespace has status "Ready":"True"
	I0729 17:51:10.410849  105708 pod_ready.go:81] duration metric: took 5.474903ms for pod "kube-apiserver-ha-794405" in "kube-system" namespace to be "Ready" ...
	I0729 17:51:10.410857  105708 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-794405-m02" in "kube-system" namespace to be "Ready" ...
	I0729 17:51:10.410904  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-794405-m02
	I0729 17:51:10.410912  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:10.410918  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:10.410921  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:10.413082  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:51:10.559008  105708 request.go:629] Waited for 145.311469ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:51:10.559063  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:51:10.559068  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:10.559075  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:10.559078  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:10.561945  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:51:10.562429  105708 pod_ready.go:92] pod "kube-apiserver-ha-794405-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 17:51:10.562446  105708 pod_ready.go:81] duration metric: took 151.584306ms for pod "kube-apiserver-ha-794405-m02" in "kube-system" namespace to be "Ready" ...
	I0729 17:51:10.562456  105708 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-794405" in "kube-system" namespace to be "Ready" ...
	I0729 17:51:10.758890  105708 request.go:629] Waited for 196.352271ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-794405
	I0729 17:51:10.758959  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-794405
	I0729 17:51:10.758966  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:10.758977  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:10.758984  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:10.762199  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:51:10.959007  105708 request.go:629] Waited for 196.057263ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-794405
	I0729 17:51:10.959072  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405
	I0729 17:51:10.959080  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:10.959089  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:10.959096  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:10.962208  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:51:10.962916  105708 pod_ready.go:92] pod "kube-controller-manager-ha-794405" in "kube-system" namespace has status "Ready":"True"
	I0729 17:51:10.962937  105708 pod_ready.go:81] duration metric: took 400.475478ms for pod "kube-controller-manager-ha-794405" in "kube-system" namespace to be "Ready" ...
	I0729 17:51:10.962948  105708 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-794405-m02" in "kube-system" namespace to be "Ready" ...
	I0729 17:51:11.159321  105708 request.go:629] Waited for 196.305681ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-794405-m02
	I0729 17:51:11.159396  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-794405-m02
	I0729 17:51:11.159401  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:11.159409  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:11.159414  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:11.162769  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:51:11.358949  105708 request.go:629] Waited for 195.417223ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:51:11.359029  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:51:11.359034  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:11.359041  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:11.359046  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:11.362003  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:51:11.362642  105708 pod_ready.go:92] pod "kube-controller-manager-ha-794405-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 17:51:11.362661  105708 pod_ready.go:81] duration metric: took 399.706913ms for pod "kube-controller-manager-ha-794405-m02" in "kube-system" namespace to be "Ready" ...
	I0729 17:51:11.362676  105708 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-llkz8" in "kube-system" namespace to be "Ready" ...
	I0729 17:51:11.558798  105708 request.go:629] Waited for 196.045783ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-llkz8
	I0729 17:51:11.558883  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-llkz8
	I0729 17:51:11.558890  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:11.558901  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:11.558910  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:11.562626  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:51:11.758813  105708 request.go:629] Waited for 195.361111ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-794405
	I0729 17:51:11.758876  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405
	I0729 17:51:11.758881  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:11.758889  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:11.758895  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:11.761854  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:51:11.762584  105708 pod_ready.go:92] pod "kube-proxy-llkz8" in "kube-system" namespace has status "Ready":"True"
	I0729 17:51:11.762611  105708 pod_ready.go:81] duration metric: took 399.920602ms for pod "kube-proxy-llkz8" in "kube-system" namespace to be "Ready" ...
	I0729 17:51:11.762620  105708 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qcmxl" in "kube-system" namespace to be "Ready" ...
	I0729 17:51:11.958999  105708 request.go:629] Waited for 196.309399ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qcmxl
	I0729 17:51:11.959070  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qcmxl
	I0729 17:51:11.959080  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:11.959091  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:11.959101  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:11.962553  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:51:12.158622  105708 request.go:629] Waited for 195.277383ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:51:12.158686  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:51:12.158692  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:12.158701  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:12.158706  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:12.161758  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:51:12.162313  105708 pod_ready.go:92] pod "kube-proxy-qcmxl" in "kube-system" namespace has status "Ready":"True"
	I0729 17:51:12.162331  105708 pod_ready.go:81] duration metric: took 399.705375ms for pod "kube-proxy-qcmxl" in "kube-system" namespace to be "Ready" ...
	I0729 17:51:12.162343  105708 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-794405" in "kube-system" namespace to be "Ready" ...
	I0729 17:51:12.358408  105708 request.go:629] Waited for 195.986243ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-794405
	I0729 17:51:12.358505  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-794405
	I0729 17:51:12.358518  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:12.358528  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:12.358533  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:12.361719  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:51:12.558594  105708 request.go:629] Waited for 196.298605ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-794405
	I0729 17:51:12.558662  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405
	I0729 17:51:12.558668  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:12.558675  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:12.558679  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:12.561636  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:51:12.562230  105708 pod_ready.go:92] pod "kube-scheduler-ha-794405" in "kube-system" namespace has status "Ready":"True"
	I0729 17:51:12.562250  105708 pod_ready.go:81] duration metric: took 399.901327ms for pod "kube-scheduler-ha-794405" in "kube-system" namespace to be "Ready" ...
	I0729 17:51:12.562260  105708 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-794405-m02" in "kube-system" namespace to be "Ready" ...
	I0729 17:51:12.759320  105708 request.go:629] Waited for 196.976772ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-794405-m02
	I0729 17:51:12.759381  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-794405-m02
	I0729 17:51:12.759386  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:12.759393  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:12.759397  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:12.762572  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:51:12.959118  105708 request.go:629] Waited for 195.846133ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:51:12.959175  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:51:12.959179  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:12.959186  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:12.959191  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:12.962116  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:51:12.962744  105708 pod_ready.go:92] pod "kube-scheduler-ha-794405-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 17:51:12.962764  105708 pod_ready.go:81] duration metric: took 400.498045ms for pod "kube-scheduler-ha-794405-m02" in "kube-system" namespace to be "Ready" ...
	I0729 17:51:12.962774  105708 pod_ready.go:38] duration metric: took 5.600132075s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
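The "Waited for ... due to client-side throttling, not priority and fairness" lines above come from client-go's default client-side rate limiter (QPS 5, burst 10), which delays requests when the pod and node GETs arrive in quick bursts; it is unrelated to server-side priority and fairness. A small sketch of where those limits live, assuming a standard client-go setup (placeholder kubeconfig path):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path (not taken from this run).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	// client-go defaults to QPS=5, Burst=10; bursts of GETs beyond that are
	// delayed locally, which is exactly what the "Waited ... due to
	// client-side throttling" messages above report. Raising the limits
	// removes the delay.
	cfg.QPS = 50
	cfg.Burst = 100
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("client ready: %T\n", cs)
}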
	I0729 17:51:12.962790  105708 api_server.go:52] waiting for apiserver process to appear ...
	I0729 17:51:12.962842  105708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 17:51:12.978299  105708 api_server.go:72] duration metric: took 19.949674148s to wait for apiserver process to appear ...
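The process check above shells out to pgrep with an exact, full-command-line match for the kube-apiserver binary. A minimal sketch of the same invocation via os/exec (in the test this runs over SSH on the node; here it is executed locally purely for illustration):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// -x exact match, -n newest matching process, -f match the full command line.
	out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		fmt.Println("kube-apiserver process not found:", err)
		return
	}
	fmt.Printf("kube-apiserver pid: %s", out)
}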
	I0729 17:51:12.978317  105708 api_server.go:88] waiting for apiserver healthz status ...
	I0729 17:51:12.978338  105708 api_server.go:253] Checking apiserver healthz at https://192.168.39.102:8443/healthz ...
	I0729 17:51:12.982647  105708 api_server.go:279] https://192.168.39.102:8443/healthz returned 200:
	ok
	I0729 17:51:12.982708  105708 round_trippers.go:463] GET https://192.168.39.102:8443/version
	I0729 17:51:12.982715  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:12.982723  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:12.982728  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:12.983642  105708 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0729 17:51:12.983761  105708 api_server.go:141] control plane version: v1.30.3
	I0729 17:51:12.983784  105708 api_server.go:131] duration metric: took 5.459255ms to wait for apiserver health ...
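The healthz probe above is a plain HTTPS GET against the apiserver that expects a 200 status and the literal body "ok". A self-contained sketch of the same check using only the standard library (TLS verification is skipped here just to keep the example short; the real client trusts the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz issues the same kind of GET the log shows against
// https://<apiserver>/healthz and prints the status plus body ("ok" when healthy).
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skipping verification only to keep the sketch self-contained.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
	return nil
}

func main() {
	if err := checkHealthz("https://192.168.39.102:8443/healthz"); err != nil {
		panic(err)
	}
}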
	I0729 17:51:12.983794  105708 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 17:51:13.159231  105708 request.go:629] Waited for 175.337331ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I0729 17:51:13.159291  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I0729 17:51:13.159295  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:13.159303  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:13.159310  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:13.164029  105708 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 17:51:13.168981  105708 system_pods.go:59] 17 kube-system pods found
	I0729 17:51:13.169011  105708 system_pods.go:61] "coredns-7db6d8ff4d-bb2jg" [ee9ad335-25b2-4e6c-a523-47b06ce713dc] Running
	I0729 17:51:13.169016  105708 system_pods.go:61] "coredns-7db6d8ff4d-nzvff" [b1e2c116-2549-4e1a-8d79-cd86595db9f3] Running
	I0729 17:51:13.169019  105708 system_pods.go:61] "etcd-ha-794405" [95b13c7f-e6bf-4225-a00b-de1fefb711ac] Running
	I0729 17:51:13.169023  105708 system_pods.go:61] "etcd-ha-794405-m02" [5c99a8df-66da-45b0-b62a-30e17113a35d] Running
	I0729 17:51:13.169027  105708 system_pods.go:61] "kindnet-8qgq5" [0b2b707a-4283-41cd-9f3b-83b6f2d169cf] Running
	I0729 17:51:13.169031  105708 system_pods.go:61] "kindnet-j4l89" [c0b81d74-531b-4878-84ea-654e7b57f0ba] Running
	I0729 17:51:13.169036  105708 system_pods.go:61] "kube-apiserver-ha-794405" [32e07436-25ae-4f51-b0e6-004ed954d8b7] Running
	I0729 17:51:13.169041  105708 system_pods.go:61] "kube-apiserver-ha-794405-m02" [0e7afc5b-62cf-4874-ade9-1df7a6c3926d] Running
	I0729 17:51:13.169046  105708 system_pods.go:61] "kube-controller-manager-ha-794405" [c2d30a4e-d2bc-4221-b8d9-98b67932e4ab] Running
	I0729 17:51:13.169051  105708 system_pods.go:61] "kube-controller-manager-ha-794405-m02" [88e040cd-4d7f-4a2e-97cb-4f24249d1c82] Running
	I0729 17:51:13.169058  105708 system_pods.go:61] "kube-proxy-llkz8" [95536eff-3f12-4a7e-9504-c8f6b1acc4cb] Running
	I0729 17:51:13.169062  105708 system_pods.go:61] "kube-proxy-qcmxl" [963fc8f8-3080-4602-9437-d2060c7ea622] Running
	I0729 17:51:13.169068  105708 system_pods.go:61] "kube-scheduler-ha-794405" [b41eb8ec-bf30-4cc6-b454-28305ccf70b5] Running
	I0729 17:51:13.169073  105708 system_pods.go:61] "kube-scheduler-ha-794405-m02" [8a0bfe0d-b80a-4799-8371-84041938bf1d] Running
	I0729 17:51:13.169081  105708 system_pods.go:61] "kube-vip-ha-794405" [0e782ab8-0d52-4894-b003-493294ab4710] Running
	I0729 17:51:13.169086  105708 system_pods.go:61] "kube-vip-ha-794405-m02" [f7e2f40a-28ab-4655-a82c-11df8cf806d5] Running
	I0729 17:51:13.169092  105708 system_pods.go:61] "storage-provisioner" [0e08d093-f8b5-4614-9be2-5832f7cafa75] Running
	I0729 17:51:13.169098  105708 system_pods.go:74] duration metric: took 185.297964ms to wait for pod list to return data ...
	I0729 17:51:13.169108  105708 default_sa.go:34] waiting for default service account to be created ...
	I0729 17:51:13.358462  105708 request.go:629] Waited for 189.275415ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/default/serviceaccounts
	I0729 17:51:13.358534  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/default/serviceaccounts
	I0729 17:51:13.358547  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:13.358557  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:13.358568  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:13.361778  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:51:13.362015  105708 default_sa.go:45] found service account: "default"
	I0729 17:51:13.362033  105708 default_sa.go:55] duration metric: took 192.917988ms for default service account to be created ...
	I0729 17:51:13.362042  105708 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 17:51:13.559189  105708 request.go:629] Waited for 197.080882ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I0729 17:51:13.559261  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I0729 17:51:13.559268  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:13.559278  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:13.559288  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:13.564241  105708 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 17:51:13.568460  105708 system_pods.go:86] 17 kube-system pods found
	I0729 17:51:13.568486  105708 system_pods.go:89] "coredns-7db6d8ff4d-bb2jg" [ee9ad335-25b2-4e6c-a523-47b06ce713dc] Running
	I0729 17:51:13.568491  105708 system_pods.go:89] "coredns-7db6d8ff4d-nzvff" [b1e2c116-2549-4e1a-8d79-cd86595db9f3] Running
	I0729 17:51:13.568495  105708 system_pods.go:89] "etcd-ha-794405" [95b13c7f-e6bf-4225-a00b-de1fefb711ac] Running
	I0729 17:51:13.568499  105708 system_pods.go:89] "etcd-ha-794405-m02" [5c99a8df-66da-45b0-b62a-30e17113a35d] Running
	I0729 17:51:13.568503  105708 system_pods.go:89] "kindnet-8qgq5" [0b2b707a-4283-41cd-9f3b-83b6f2d169cf] Running
	I0729 17:51:13.568507  105708 system_pods.go:89] "kindnet-j4l89" [c0b81d74-531b-4878-84ea-654e7b57f0ba] Running
	I0729 17:51:13.568511  105708 system_pods.go:89] "kube-apiserver-ha-794405" [32e07436-25ae-4f51-b0e6-004ed954d8b7] Running
	I0729 17:51:13.568515  105708 system_pods.go:89] "kube-apiserver-ha-794405-m02" [0e7afc5b-62cf-4874-ade9-1df7a6c3926d] Running
	I0729 17:51:13.568519  105708 system_pods.go:89] "kube-controller-manager-ha-794405" [c2d30a4e-d2bc-4221-b8d9-98b67932e4ab] Running
	I0729 17:51:13.568523  105708 system_pods.go:89] "kube-controller-manager-ha-794405-m02" [88e040cd-4d7f-4a2e-97cb-4f24249d1c82] Running
	I0729 17:51:13.568527  105708 system_pods.go:89] "kube-proxy-llkz8" [95536eff-3f12-4a7e-9504-c8f6b1acc4cb] Running
	I0729 17:51:13.568531  105708 system_pods.go:89] "kube-proxy-qcmxl" [963fc8f8-3080-4602-9437-d2060c7ea622] Running
	I0729 17:51:13.568534  105708 system_pods.go:89] "kube-scheduler-ha-794405" [b41eb8ec-bf30-4cc6-b454-28305ccf70b5] Running
	I0729 17:51:13.568538  105708 system_pods.go:89] "kube-scheduler-ha-794405-m02" [8a0bfe0d-b80a-4799-8371-84041938bf1d] Running
	I0729 17:51:13.568544  105708 system_pods.go:89] "kube-vip-ha-794405" [0e782ab8-0d52-4894-b003-493294ab4710] Running
	I0729 17:51:13.568550  105708 system_pods.go:89] "kube-vip-ha-794405-m02" [f7e2f40a-28ab-4655-a82c-11df8cf806d5] Running
	I0729 17:51:13.568555  105708 system_pods.go:89] "storage-provisioner" [0e08d093-f8b5-4614-9be2-5832f7cafa75] Running
	I0729 17:51:13.568561  105708 system_pods.go:126] duration metric: took 206.513897ms to wait for k8s-apps to be running ...
	I0729 17:51:13.568570  105708 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 17:51:13.568616  105708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:51:13.584105  105708 system_svc.go:56] duration metric: took 15.522568ms WaitForService to wait for kubelet
	I0729 17:51:13.584136  105708 kubeadm.go:582] duration metric: took 20.555513243s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 17:51:13.584155  105708 node_conditions.go:102] verifying NodePressure condition ...
	I0729 17:51:13.758502  105708 request.go:629] Waited for 174.254052ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes
	I0729 17:51:13.758577  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes
	I0729 17:51:13.758584  105708 round_trippers.go:469] Request Headers:
	I0729 17:51:13.758592  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:51:13.758599  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:51:13.763156  105708 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 17:51:13.764285  105708 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 17:51:13.764311  105708 node_conditions.go:123] node cpu capacity is 2
	I0729 17:51:13.764322  105708 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 17:51:13.764326  105708 node_conditions.go:123] node cpu capacity is 2
	I0729 17:51:13.764331  105708 node_conditions.go:105] duration metric: took 180.172008ms to run NodePressure ...
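The NodePressure step above reads each node's reported capacity (ephemeral storage and CPU) from the node objects. A short client-go sketch of pulling those same fields (placeholder kubeconfig path, not taken from this run):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		// Matches the "node storage ephemeral capacity" / "node cpu capacity" lines above.
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
}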
	I0729 17:51:13.764342  105708 start.go:241] waiting for startup goroutines ...
	I0729 17:51:13.764365  105708 start.go:255] writing updated cluster config ...
	I0729 17:51:13.766333  105708 out.go:177] 
	I0729 17:51:13.767774  105708 config.go:182] Loaded profile config "ha-794405": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:51:13.767861  105708 profile.go:143] Saving config to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/config.json ...
	I0729 17:51:13.769575  105708 out.go:177] * Starting "ha-794405-m03" control-plane node in "ha-794405" cluster
	I0729 17:51:13.770820  105708 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 17:51:13.770842  105708 cache.go:56] Caching tarball of preloaded images
	I0729 17:51:13.770959  105708 preload.go:172] Found /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 17:51:13.770974  105708 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 17:51:13.771093  105708 profile.go:143] Saving config to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/config.json ...
	I0729 17:51:13.771292  105708 start.go:360] acquireMachinesLock for ha-794405-m03: {Name:mkb4fc91615189a18c2505c715f6575ee0da2912 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:51:13.771340  105708 start.go:364] duration metric: took 27.932µs to acquireMachinesLock for "ha-794405-m03"
	I0729 17:51:13.771364  105708 start.go:93] Provisioning new machine with config: &{Name:ha-794405 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-794405 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.62 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 17:51:13.771491  105708 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0729 17:51:13.772994  105708 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 17:51:13.773093  105708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:51:13.773134  105708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:51:13.789231  105708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37215
	I0729 17:51:13.789690  105708 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:51:13.790213  105708 main.go:141] libmachine: Using API Version  1
	I0729 17:51:13.790238  105708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:51:13.790573  105708 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:51:13.790738  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetMachineName
	I0729 17:51:13.790879  105708 main.go:141] libmachine: (ha-794405-m03) Calling .DriverName
	I0729 17:51:13.791028  105708 start.go:159] libmachine.API.Create for "ha-794405" (driver="kvm2")
	I0729 17:51:13.791052  105708 client.go:168] LocalClient.Create starting
	I0729 17:51:13.791076  105708 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem
	I0729 17:51:13.791104  105708 main.go:141] libmachine: Decoding PEM data...
	I0729 17:51:13.791118  105708 main.go:141] libmachine: Parsing certificate...
	I0729 17:51:13.791168  105708 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem
	I0729 17:51:13.791188  105708 main.go:141] libmachine: Decoding PEM data...
	I0729 17:51:13.791198  105708 main.go:141] libmachine: Parsing certificate...
	I0729 17:51:13.791215  105708 main.go:141] libmachine: Running pre-create checks...
	I0729 17:51:13.791222  105708 main.go:141] libmachine: (ha-794405-m03) Calling .PreCreateCheck
	I0729 17:51:13.791379  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetConfigRaw
	I0729 17:51:13.791697  105708 main.go:141] libmachine: Creating machine...
	I0729 17:51:13.791709  105708 main.go:141] libmachine: (ha-794405-m03) Calling .Create
	I0729 17:51:13.791855  105708 main.go:141] libmachine: (ha-794405-m03) Creating KVM machine...
	I0729 17:51:13.793425  105708 main.go:141] libmachine: (ha-794405-m03) DBG | found existing default KVM network
	I0729 17:51:13.793547  105708 main.go:141] libmachine: (ha-794405-m03) DBG | found existing private KVM network mk-ha-794405
	I0729 17:51:13.793721  105708 main.go:141] libmachine: (ha-794405-m03) Setting up store path in /home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m03 ...
	I0729 17:51:13.793749  105708 main.go:141] libmachine: (ha-794405-m03) Building disk image from file:///home/jenkins/minikube-integration/19339-88081/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0729 17:51:13.793799  105708 main.go:141] libmachine: (ha-794405-m03) DBG | I0729 17:51:13.793686  106467 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19339-88081/.minikube
	I0729 17:51:13.793884  105708 main.go:141] libmachine: (ha-794405-m03) Downloading /home/jenkins/minikube-integration/19339-88081/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19339-88081/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0729 17:51:14.056774  105708 main.go:141] libmachine: (ha-794405-m03) DBG | I0729 17:51:14.056635  106467 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m03/id_rsa...
	I0729 17:51:14.310893  105708 main.go:141] libmachine: (ha-794405-m03) DBG | I0729 17:51:14.310745  106467 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m03/ha-794405-m03.rawdisk...
	I0729 17:51:14.310929  105708 main.go:141] libmachine: (ha-794405-m03) DBG | Writing magic tar header
	I0729 17:51:14.310951  105708 main.go:141] libmachine: (ha-794405-m03) DBG | Writing SSH key tar header
	I0729 17:51:14.310963  105708 main.go:141] libmachine: (ha-794405-m03) DBG | I0729 17:51:14.310866  106467 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m03 ...
	I0729 17:51:14.310978  105708 main.go:141] libmachine: (ha-794405-m03) Setting executable bit set on /home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m03 (perms=drwx------)
	I0729 17:51:14.310997  105708 main.go:141] libmachine: (ha-794405-m03) Setting executable bit set on /home/jenkins/minikube-integration/19339-88081/.minikube/machines (perms=drwxr-xr-x)
	I0729 17:51:14.311005  105708 main.go:141] libmachine: (ha-794405-m03) Setting executable bit set on /home/jenkins/minikube-integration/19339-88081/.minikube (perms=drwxr-xr-x)
	I0729 17:51:14.311021  105708 main.go:141] libmachine: (ha-794405-m03) Setting executable bit set on /home/jenkins/minikube-integration/19339-88081 (perms=drwxrwxr-x)
	I0729 17:51:14.311033  105708 main.go:141] libmachine: (ha-794405-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 17:51:14.311047  105708 main.go:141] libmachine: (ha-794405-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 17:51:14.311061  105708 main.go:141] libmachine: (ha-794405-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m03
	I0729 17:51:14.311078  105708 main.go:141] libmachine: (ha-794405-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19339-88081/.minikube/machines
	I0729 17:51:14.311090  105708 main.go:141] libmachine: (ha-794405-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19339-88081/.minikube
	I0729 17:51:14.311099  105708 main.go:141] libmachine: (ha-794405-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19339-88081
	I0729 17:51:14.311104  105708 main.go:141] libmachine: (ha-794405-m03) Creating domain...
	I0729 17:51:14.311137  105708 main.go:141] libmachine: (ha-794405-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 17:51:14.311162  105708 main.go:141] libmachine: (ha-794405-m03) DBG | Checking permissions on dir: /home/jenkins
	I0729 17:51:14.311175  105708 main.go:141] libmachine: (ha-794405-m03) DBG | Checking permissions on dir: /home
	I0729 17:51:14.311187  105708 main.go:141] libmachine: (ha-794405-m03) DBG | Skipping /home - not owner
	I0729 17:51:14.312094  105708 main.go:141] libmachine: (ha-794405-m03) define libvirt domain using xml: 
	I0729 17:51:14.312115  105708 main.go:141] libmachine: (ha-794405-m03) <domain type='kvm'>
	I0729 17:51:14.312125  105708 main.go:141] libmachine: (ha-794405-m03)   <name>ha-794405-m03</name>
	I0729 17:51:14.312137  105708 main.go:141] libmachine: (ha-794405-m03)   <memory unit='MiB'>2200</memory>
	I0729 17:51:14.312148  105708 main.go:141] libmachine: (ha-794405-m03)   <vcpu>2</vcpu>
	I0729 17:51:14.312155  105708 main.go:141] libmachine: (ha-794405-m03)   <features>
	I0729 17:51:14.312162  105708 main.go:141] libmachine: (ha-794405-m03)     <acpi/>
	I0729 17:51:14.312167  105708 main.go:141] libmachine: (ha-794405-m03)     <apic/>
	I0729 17:51:14.312175  105708 main.go:141] libmachine: (ha-794405-m03)     <pae/>
	I0729 17:51:14.312185  105708 main.go:141] libmachine: (ha-794405-m03)     
	I0729 17:51:14.312192  105708 main.go:141] libmachine: (ha-794405-m03)   </features>
	I0729 17:51:14.312203  105708 main.go:141] libmachine: (ha-794405-m03)   <cpu mode='host-passthrough'>
	I0729 17:51:14.312226  105708 main.go:141] libmachine: (ha-794405-m03)   
	I0729 17:51:14.312242  105708 main.go:141] libmachine: (ha-794405-m03)   </cpu>
	I0729 17:51:14.312249  105708 main.go:141] libmachine: (ha-794405-m03)   <os>
	I0729 17:51:14.312261  105708 main.go:141] libmachine: (ha-794405-m03)     <type>hvm</type>
	I0729 17:51:14.312274  105708 main.go:141] libmachine: (ha-794405-m03)     <boot dev='cdrom'/>
	I0729 17:51:14.312283  105708 main.go:141] libmachine: (ha-794405-m03)     <boot dev='hd'/>
	I0729 17:51:14.312290  105708 main.go:141] libmachine: (ha-794405-m03)     <bootmenu enable='no'/>
	I0729 17:51:14.312296  105708 main.go:141] libmachine: (ha-794405-m03)   </os>
	I0729 17:51:14.312302  105708 main.go:141] libmachine: (ha-794405-m03)   <devices>
	I0729 17:51:14.312313  105708 main.go:141] libmachine: (ha-794405-m03)     <disk type='file' device='cdrom'>
	I0729 17:51:14.312325  105708 main.go:141] libmachine: (ha-794405-m03)       <source file='/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m03/boot2docker.iso'/>
	I0729 17:51:14.312335  105708 main.go:141] libmachine: (ha-794405-m03)       <target dev='hdc' bus='scsi'/>
	I0729 17:51:14.312347  105708 main.go:141] libmachine: (ha-794405-m03)       <readonly/>
	I0729 17:51:14.312355  105708 main.go:141] libmachine: (ha-794405-m03)     </disk>
	I0729 17:51:14.312365  105708 main.go:141] libmachine: (ha-794405-m03)     <disk type='file' device='disk'>
	I0729 17:51:14.312384  105708 main.go:141] libmachine: (ha-794405-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 17:51:14.312397  105708 main.go:141] libmachine: (ha-794405-m03)       <source file='/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m03/ha-794405-m03.rawdisk'/>
	I0729 17:51:14.312404  105708 main.go:141] libmachine: (ha-794405-m03)       <target dev='hda' bus='virtio'/>
	I0729 17:51:14.312410  105708 main.go:141] libmachine: (ha-794405-m03)     </disk>
	I0729 17:51:14.312417  105708 main.go:141] libmachine: (ha-794405-m03)     <interface type='network'>
	I0729 17:51:14.312423  105708 main.go:141] libmachine: (ha-794405-m03)       <source network='mk-ha-794405'/>
	I0729 17:51:14.312429  105708 main.go:141] libmachine: (ha-794405-m03)       <model type='virtio'/>
	I0729 17:51:14.312458  105708 main.go:141] libmachine: (ha-794405-m03)     </interface>
	I0729 17:51:14.312479  105708 main.go:141] libmachine: (ha-794405-m03)     <interface type='network'>
	I0729 17:51:14.312488  105708 main.go:141] libmachine: (ha-794405-m03)       <source network='default'/>
	I0729 17:51:14.312498  105708 main.go:141] libmachine: (ha-794405-m03)       <model type='virtio'/>
	I0729 17:51:14.312506  105708 main.go:141] libmachine: (ha-794405-m03)     </interface>
	I0729 17:51:14.312513  105708 main.go:141] libmachine: (ha-794405-m03)     <serial type='pty'>
	I0729 17:51:14.312521  105708 main.go:141] libmachine: (ha-794405-m03)       <target port='0'/>
	I0729 17:51:14.312528  105708 main.go:141] libmachine: (ha-794405-m03)     </serial>
	I0729 17:51:14.312563  105708 main.go:141] libmachine: (ha-794405-m03)     <console type='pty'>
	I0729 17:51:14.312584  105708 main.go:141] libmachine: (ha-794405-m03)       <target type='serial' port='0'/>
	I0729 17:51:14.312597  105708 main.go:141] libmachine: (ha-794405-m03)     </console>
	I0729 17:51:14.312606  105708 main.go:141] libmachine: (ha-794405-m03)     <rng model='virtio'>
	I0729 17:51:14.312617  105708 main.go:141] libmachine: (ha-794405-m03)       <backend model='random'>/dev/random</backend>
	I0729 17:51:14.312626  105708 main.go:141] libmachine: (ha-794405-m03)     </rng>
	I0729 17:51:14.312633  105708 main.go:141] libmachine: (ha-794405-m03)     
	I0729 17:51:14.312642  105708 main.go:141] libmachine: (ha-794405-m03)     
	I0729 17:51:14.312650  105708 main.go:141] libmachine: (ha-794405-m03)   </devices>
	I0729 17:51:14.312660  105708 main.go:141] libmachine: (ha-794405-m03) </domain>
	I0729 17:51:14.312686  105708 main.go:141] libmachine: (ha-794405-m03) 
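The domain XML above is rendered by the kvm2 driver from a Go template before the VM is defined in libvirt. A trimmed-down sketch of that kind of templating, using only the name, memory and vCPU values from this run (the field names and template below are illustrative, not the driver's actual template):

package main

import (
	"os"
	"text/template"
)

// domainTmpl is a cut-down version of the libvirt domain XML logged above;
// the real template also declares the disks, network interfaces, serial
// console and RNG devices shown in the log.
const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.Memory}}</memory>
  <vcpu>{{.CPU}}</vcpu>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
  </os>
</domain>
`

type machine struct {
	Name   string
	Memory int
	CPU    int
}

func main() {
	t := template.Must(template.New("domain").Parse(domainTmpl))
	// Values matching the node being created in this run.
	if err := t.Execute(os.Stdout, machine{Name: "ha-794405-m03", Memory: 2200, CPU: 2}); err != nil {
		panic(err)
	}
	// The rendered XML would then be handed to libvirt (for example via
	// `virsh define`) to create the domain, as the "define libvirt domain
	// using xml" step above does.
}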
	I0729 17:51:14.319904  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:ea:ab:24 in network default
	I0729 17:51:14.320556  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:14.320575  105708 main.go:141] libmachine: (ha-794405-m03) Ensuring networks are active...
	I0729 17:51:14.321415  105708 main.go:141] libmachine: (ha-794405-m03) Ensuring network default is active
	I0729 17:51:14.321796  105708 main.go:141] libmachine: (ha-794405-m03) Ensuring network mk-ha-794405 is active
	I0729 17:51:14.322436  105708 main.go:141] libmachine: (ha-794405-m03) Getting domain xml...
	I0729 17:51:14.323225  105708 main.go:141] libmachine: (ha-794405-m03) Creating domain...
	I0729 17:51:14.709258  105708 main.go:141] libmachine: (ha-794405-m03) Waiting to get IP...
	I0729 17:51:14.709927  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:14.710387  105708 main.go:141] libmachine: (ha-794405-m03) DBG | unable to find current IP address of domain ha-794405-m03 in network mk-ha-794405
	I0729 17:51:14.710412  105708 main.go:141] libmachine: (ha-794405-m03) DBG | I0729 17:51:14.710360  106467 retry.go:31] will retry after 248.338118ms: waiting for machine to come up
	I0729 17:51:14.960853  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:14.961324  105708 main.go:141] libmachine: (ha-794405-m03) DBG | unable to find current IP address of domain ha-794405-m03 in network mk-ha-794405
	I0729 17:51:14.961348  105708 main.go:141] libmachine: (ha-794405-m03) DBG | I0729 17:51:14.961283  106467 retry.go:31] will retry after 340.428087ms: waiting for machine to come up
	I0729 17:51:15.303827  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:15.304407  105708 main.go:141] libmachine: (ha-794405-m03) DBG | unable to find current IP address of domain ha-794405-m03 in network mk-ha-794405
	I0729 17:51:15.304427  105708 main.go:141] libmachine: (ha-794405-m03) DBG | I0729 17:51:15.304331  106467 retry.go:31] will retry after 410.973841ms: waiting for machine to come up
	I0729 17:51:15.716804  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:15.717300  105708 main.go:141] libmachine: (ha-794405-m03) DBG | unable to find current IP address of domain ha-794405-m03 in network mk-ha-794405
	I0729 17:51:15.717332  105708 main.go:141] libmachine: (ha-794405-m03) DBG | I0729 17:51:15.717250  106467 retry.go:31] will retry after 410.507652ms: waiting for machine to come up
	I0729 17:51:16.129586  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:16.130099  105708 main.go:141] libmachine: (ha-794405-m03) DBG | unable to find current IP address of domain ha-794405-m03 in network mk-ha-794405
	I0729 17:51:16.130127  105708 main.go:141] libmachine: (ha-794405-m03) DBG | I0729 17:51:16.130057  106467 retry.go:31] will retry after 580.57811ms: waiting for machine to come up
	I0729 17:51:16.711744  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:16.712255  105708 main.go:141] libmachine: (ha-794405-m03) DBG | unable to find current IP address of domain ha-794405-m03 in network mk-ha-794405
	I0729 17:51:16.712288  105708 main.go:141] libmachine: (ha-794405-m03) DBG | I0729 17:51:16.712210  106467 retry.go:31] will retry after 726.962476ms: waiting for machine to come up
	I0729 17:51:17.440785  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:17.441299  105708 main.go:141] libmachine: (ha-794405-m03) DBG | unable to find current IP address of domain ha-794405-m03 in network mk-ha-794405
	I0729 17:51:17.441327  105708 main.go:141] libmachine: (ha-794405-m03) DBG | I0729 17:51:17.441251  106467 retry.go:31] will retry after 1.017586827s: waiting for machine to come up
	I0729 17:51:18.460466  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:18.460923  105708 main.go:141] libmachine: (ha-794405-m03) DBG | unable to find current IP address of domain ha-794405-m03 in network mk-ha-794405
	I0729 17:51:18.460952  105708 main.go:141] libmachine: (ha-794405-m03) DBG | I0729 17:51:18.460877  106467 retry.go:31] will retry after 921.419747ms: waiting for machine to come up
	I0729 17:51:19.384477  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:19.385037  105708 main.go:141] libmachine: (ha-794405-m03) DBG | unable to find current IP address of domain ha-794405-m03 in network mk-ha-794405
	I0729 17:51:19.385065  105708 main.go:141] libmachine: (ha-794405-m03) DBG | I0729 17:51:19.384979  106467 retry.go:31] will retry after 1.55396863s: waiting for machine to come up
	I0729 17:51:20.940699  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:20.941124  105708 main.go:141] libmachine: (ha-794405-m03) DBG | unable to find current IP address of domain ha-794405-m03 in network mk-ha-794405
	I0729 17:51:20.941156  105708 main.go:141] libmachine: (ha-794405-m03) DBG | I0729 17:51:20.941069  106467 retry.go:31] will retry after 1.592103368s: waiting for machine to come up
	I0729 17:51:22.535925  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:22.536388  105708 main.go:141] libmachine: (ha-794405-m03) DBG | unable to find current IP address of domain ha-794405-m03 in network mk-ha-794405
	I0729 17:51:22.536420  105708 main.go:141] libmachine: (ha-794405-m03) DBG | I0729 17:51:22.536336  106467 retry.go:31] will retry after 1.758793191s: waiting for machine to come up
	I0729 17:51:24.296892  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:24.297388  105708 main.go:141] libmachine: (ha-794405-m03) DBG | unable to find current IP address of domain ha-794405-m03 in network mk-ha-794405
	I0729 17:51:24.297419  105708 main.go:141] libmachine: (ha-794405-m03) DBG | I0729 17:51:24.297339  106467 retry.go:31] will retry after 2.570205531s: waiting for machine to come up
	I0729 17:51:26.869801  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:26.870190  105708 main.go:141] libmachine: (ha-794405-m03) DBG | unable to find current IP address of domain ha-794405-m03 in network mk-ha-794405
	I0729 17:51:26.870210  105708 main.go:141] libmachine: (ha-794405-m03) DBG | I0729 17:51:26.870167  106467 retry.go:31] will retry after 4.232098911s: waiting for machine to come up
	I0729 17:51:31.103439  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:31.103900  105708 main.go:141] libmachine: (ha-794405-m03) DBG | unable to find current IP address of domain ha-794405-m03 in network mk-ha-794405
	I0729 17:51:31.103930  105708 main.go:141] libmachine: (ha-794405-m03) DBG | I0729 17:51:31.103843  106467 retry.go:31] will retry after 5.307752085s: waiting for machine to come up
	I0729 17:51:36.414191  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:36.414633  105708 main.go:141] libmachine: (ha-794405-m03) Found IP for machine: 192.168.39.185
	I0729 17:51:36.414655  105708 main.go:141] libmachine: (ha-794405-m03) Reserving static IP address...
	I0729 17:51:36.414664  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has current primary IP address 192.168.39.185 and MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:36.414997  105708 main.go:141] libmachine: (ha-794405-m03) DBG | unable to find host DHCP lease matching {name: "ha-794405-m03", mac: "52:54:00:6d:a7:17", ip: "192.168.39.185"} in network mk-ha-794405
	I0729 17:51:36.488205  105708 main.go:141] libmachine: (ha-794405-m03) DBG | Getting to WaitForSSH function...
	I0729 17:51:36.488236  105708 main.go:141] libmachine: (ha-794405-m03) Reserved static IP address: 192.168.39.185
	I0729 17:51:36.488248  105708 main.go:141] libmachine: (ha-794405-m03) Waiting for SSH to be available...
	I0729 17:51:36.490876  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:36.491269  105708 main.go:141] libmachine: (ha-794405-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:a7:17", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:51:28 +0000 UTC Type:0 Mac:52:54:00:6d:a7:17 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:minikube Clientid:01:52:54:00:6d:a7:17}
	I0729 17:51:36.491303  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined IP address 192.168.39.185 and MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:36.491518  105708 main.go:141] libmachine: (ha-794405-m03) DBG | Using SSH client type: external
	I0729 17:51:36.491547  105708 main.go:141] libmachine: (ha-794405-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m03/id_rsa (-rw-------)
	I0729 17:51:36.491581  105708 main.go:141] libmachine: (ha-794405-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.185 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 17:51:36.491595  105708 main.go:141] libmachine: (ha-794405-m03) DBG | About to run SSH command:
	I0729 17:51:36.491618  105708 main.go:141] libmachine: (ha-794405-m03) DBG | exit 0
	I0729 17:51:36.612830  105708 main.go:141] libmachine: (ha-794405-m03) DBG | SSH cmd err, output: <nil>: 
	I0729 17:51:36.613119  105708 main.go:141] libmachine: (ha-794405-m03) KVM machine creation complete!
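	(annotation) The "will retry after …ms: waiting for machine to come up" lines above are minikube's retry helper polling the libvirt DHCP lease until the new domain reports an IP. As a rough illustration of that retry-with-backoff shape only (this is not minikube's retry.go; lookupIP, the delays and the jitter scheme are assumptions), a minimal Go sketch:

	    package main

	    import (
	    	"errors"
	    	"fmt"
	    	"math/rand"
	    	"time"
	    )

	    // lookupIP is a hypothetical stand-in for querying the libvirt DHCP
	    // leases for the domain's MAC address.
	    func lookupIP() (string, error) {
	    	return "", errors.New("unable to find current IP address")
	    }

	    // waitForIP polls lookupIP with a growing, jittered delay until it
	    // succeeds or the deadline passes - the same shape as the
	    // "will retry after 248.338118ms" lines in the log above.
	    func waitForIP(timeout time.Duration) (string, error) {
	    	deadline := time.Now().Add(timeout)
	    	delay := 250 * time.Millisecond
	    	for attempt := 1; time.Now().Before(deadline); attempt++ {
	    		ip, err := lookupIP()
	    		if err == nil {
	    			return ip, nil
	    		}
	    		// Add jitter and grow the delay so repeated probes back off.
	    		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
	    		fmt.Printf("attempt %d failed, will retry after %v: %v\n", attempt, sleep, err)
	    		time.Sleep(sleep)
	    		delay = delay * 3 / 2
	    	}
	    	return "", fmt.Errorf("timed out after %v waiting for machine IP", timeout)
	    }

	    func main() {
	    	if ip, err := waitForIP(5 * time.Second); err != nil {
	    		fmt.Println("error:", err)
	    	} else {
	    		fmt.Println("found IP:", ip)
	    	}
	    }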
	I0729 17:51:36.613488  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetConfigRaw
	I0729 17:51:36.613983  105708 main.go:141] libmachine: (ha-794405-m03) Calling .DriverName
	I0729 17:51:36.614189  105708 main.go:141] libmachine: (ha-794405-m03) Calling .DriverName
	I0729 17:51:36.614354  105708 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 17:51:36.614367  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetState
	I0729 17:51:36.615674  105708 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 17:51:36.615687  105708 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 17:51:36.615692  105708 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 17:51:36.615699  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHHostname
	I0729 17:51:36.618113  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:36.618448  105708 main.go:141] libmachine: (ha-794405-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:a7:17", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:51:28 +0000 UTC Type:0 Mac:52:54:00:6d:a7:17 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-794405-m03 Clientid:01:52:54:00:6d:a7:17}
	I0729 17:51:36.618474  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined IP address 192.168.39.185 and MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:36.618652  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHPort
	I0729 17:51:36.618844  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHKeyPath
	I0729 17:51:36.618985  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHKeyPath
	I0729 17:51:36.619096  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHUsername
	I0729 17:51:36.619214  105708 main.go:141] libmachine: Using SSH client type: native
	I0729 17:51:36.619400  105708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0729 17:51:36.619412  105708 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 17:51:36.719979  105708 main.go:141] libmachine: SSH cmd err, output: <nil>: 
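	(annotation) Both WaitForSSH passes above simply run "exit 0" over SSH and treat a clean exit as "the guest is reachable". A minimal sketch of that probe using the system ssh binary (host and key path are placeholders; this is not minikube's sshutil code):

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    	"time"
	    )

	    // sshReachable runs "exit 0" on the guest via the system ssh binary
	    // and reports whether it exited cleanly, mirroring the
	    // "About to run SSH command: exit 0" probe in the log above.
	    func sshReachable(host, keyPath string) bool {
	    	cmd := exec.Command("ssh",
	    		"-o", "StrictHostKeyChecking=no",
	    		"-o", "UserKnownHostsFile=/dev/null",
	    		"-o", "ConnectTimeout=10",
	    		"-i", keyPath,
	    		"docker@"+host,
	    		"exit 0",
	    	)
	    	return cmd.Run() == nil
	    }

	    func main() {
	    	host, keyPath := "192.168.39.185", "/path/to/id_rsa" // placeholders
	    	for i := 0; i < 3; i++ {
	    		if sshReachable(host, keyPath) {
	    			fmt.Println("SSH is available")
	    			return
	    		}
	    		time.Sleep(2 * time.Second)
	    	}
	    	fmt.Println("SSH did not become available")
	    }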
	I0729 17:51:36.720005  105708 main.go:141] libmachine: Detecting the provisioner...
	I0729 17:51:36.720017  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHHostname
	I0729 17:51:36.722991  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:36.723398  105708 main.go:141] libmachine: (ha-794405-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:a7:17", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:51:28 +0000 UTC Type:0 Mac:52:54:00:6d:a7:17 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-794405-m03 Clientid:01:52:54:00:6d:a7:17}
	I0729 17:51:36.723425  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined IP address 192.168.39.185 and MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:36.723601  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHPort
	I0729 17:51:36.723807  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHKeyPath
	I0729 17:51:36.723981  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHKeyPath
	I0729 17:51:36.724109  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHUsername
	I0729 17:51:36.724286  105708 main.go:141] libmachine: Using SSH client type: native
	I0729 17:51:36.724471  105708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0729 17:51:36.724487  105708 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 17:51:36.825657  105708 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 17:51:36.825720  105708 main.go:141] libmachine: found compatible host: buildroot
	I0729 17:51:36.825731  105708 main.go:141] libmachine: Provisioning with buildroot...
	I0729 17:51:36.825739  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetMachineName
	I0729 17:51:36.826037  105708 buildroot.go:166] provisioning hostname "ha-794405-m03"
	I0729 17:51:36.826070  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetMachineName
	I0729 17:51:36.826288  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHHostname
	I0729 17:51:36.829124  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:36.829573  105708 main.go:141] libmachine: (ha-794405-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:a7:17", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:51:28 +0000 UTC Type:0 Mac:52:54:00:6d:a7:17 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-794405-m03 Clientid:01:52:54:00:6d:a7:17}
	I0729 17:51:36.829604  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined IP address 192.168.39.185 and MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:36.829739  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHPort
	I0729 17:51:36.829908  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHKeyPath
	I0729 17:51:36.830079  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHKeyPath
	I0729 17:51:36.830243  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHUsername
	I0729 17:51:36.830406  105708 main.go:141] libmachine: Using SSH client type: native
	I0729 17:51:36.830585  105708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0729 17:51:36.830600  105708 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-794405-m03 && echo "ha-794405-m03" | sudo tee /etc/hostname
	I0729 17:51:36.949282  105708 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-794405-m03
	
	I0729 17:51:36.949307  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHHostname
	I0729 17:51:36.952008  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:36.952366  105708 main.go:141] libmachine: (ha-794405-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:a7:17", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:51:28 +0000 UTC Type:0 Mac:52:54:00:6d:a7:17 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-794405-m03 Clientid:01:52:54:00:6d:a7:17}
	I0729 17:51:36.952394  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined IP address 192.168.39.185 and MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:36.952586  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHPort
	I0729 17:51:36.952765  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHKeyPath
	I0729 17:51:36.952932  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHKeyPath
	I0729 17:51:36.953080  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHUsername
	I0729 17:51:36.953277  105708 main.go:141] libmachine: Using SSH client type: native
	I0729 17:51:36.953449  105708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0729 17:51:36.953471  105708 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-794405-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-794405-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-794405-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 17:51:37.063551  105708 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 17:51:37.063595  105708 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19339-88081/.minikube CaCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19339-88081/.minikube}
	I0729 17:51:37.063612  105708 buildroot.go:174] setting up certificates
	I0729 17:51:37.063620  105708 provision.go:84] configureAuth start
	I0729 17:51:37.063629  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetMachineName
	I0729 17:51:37.063905  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetIP
	I0729 17:51:37.066402  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:37.066730  105708 main.go:141] libmachine: (ha-794405-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:a7:17", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:51:28 +0000 UTC Type:0 Mac:52:54:00:6d:a7:17 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-794405-m03 Clientid:01:52:54:00:6d:a7:17}
	I0729 17:51:37.066760  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined IP address 192.168.39.185 and MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:37.066922  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHHostname
	I0729 17:51:37.068894  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:37.069229  105708 main.go:141] libmachine: (ha-794405-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:a7:17", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:51:28 +0000 UTC Type:0 Mac:52:54:00:6d:a7:17 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-794405-m03 Clientid:01:52:54:00:6d:a7:17}
	I0729 17:51:37.069255  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined IP address 192.168.39.185 and MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:37.069378  105708 provision.go:143] copyHostCerts
	I0729 17:51:37.069418  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem
	I0729 17:51:37.069458  105708 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem, removing ...
	I0729 17:51:37.069468  105708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem
	I0729 17:51:37.069551  105708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem (1078 bytes)
	I0729 17:51:37.069643  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem
	I0729 17:51:37.069669  105708 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem, removing ...
	I0729 17:51:37.069676  105708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem
	I0729 17:51:37.069713  105708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem (1123 bytes)
	I0729 17:51:37.069783  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem
	I0729 17:51:37.069809  105708 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem, removing ...
	I0729 17:51:37.069825  105708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem
	I0729 17:51:37.069864  105708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem (1679 bytes)
	I0729 17:51:37.069936  105708 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem org=jenkins.ha-794405-m03 san=[127.0.0.1 192.168.39.185 ha-794405-m03 localhost minikube]
	I0729 17:51:37.123476  105708 provision.go:177] copyRemoteCerts
	I0729 17:51:37.123537  105708 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 17:51:37.123565  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHHostname
	I0729 17:51:37.125942  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:37.126301  105708 main.go:141] libmachine: (ha-794405-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:a7:17", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:51:28 +0000 UTC Type:0 Mac:52:54:00:6d:a7:17 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-794405-m03 Clientid:01:52:54:00:6d:a7:17}
	I0729 17:51:37.126333  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined IP address 192.168.39.185 and MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:37.126470  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHPort
	I0729 17:51:37.126672  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHKeyPath
	I0729 17:51:37.126853  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHUsername
	I0729 17:51:37.126985  105708 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m03/id_rsa Username:docker}
	I0729 17:51:37.208668  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 17:51:37.208731  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 17:51:37.232392  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 17:51:37.232463  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0729 17:51:37.257307  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 17:51:37.257370  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 17:51:37.281279  105708 provision.go:87] duration metric: took 217.645775ms to configureAuth
	I0729 17:51:37.281307  105708 buildroot.go:189] setting minikube options for container-runtime
	I0729 17:51:37.281534  105708 config.go:182] Loaded profile config "ha-794405": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:51:37.281623  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHHostname
	I0729 17:51:37.285007  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:37.285479  105708 main.go:141] libmachine: (ha-794405-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:a7:17", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:51:28 +0000 UTC Type:0 Mac:52:54:00:6d:a7:17 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-794405-m03 Clientid:01:52:54:00:6d:a7:17}
	I0729 17:51:37.285506  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined IP address 192.168.39.185 and MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:37.285699  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHPort
	I0729 17:51:37.285883  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHKeyPath
	I0729 17:51:37.286059  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHKeyPath
	I0729 17:51:37.286202  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHUsername
	I0729 17:51:37.286423  105708 main.go:141] libmachine: Using SSH client type: native
	I0729 17:51:37.286604  105708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0729 17:51:37.286635  105708 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 17:51:37.554130  105708 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 17:51:37.554162  105708 main.go:141] libmachine: Checking connection to Docker...
	I0729 17:51:37.554172  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetURL
	I0729 17:51:37.555534  105708 main.go:141] libmachine: (ha-794405-m03) DBG | Using libvirt version 6000000
	I0729 17:51:37.557565  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:37.558027  105708 main.go:141] libmachine: (ha-794405-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:a7:17", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:51:28 +0000 UTC Type:0 Mac:52:54:00:6d:a7:17 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-794405-m03 Clientid:01:52:54:00:6d:a7:17}
	I0729 17:51:37.558054  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined IP address 192.168.39.185 and MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:37.558190  105708 main.go:141] libmachine: Docker is up and running!
	I0729 17:51:37.558202  105708 main.go:141] libmachine: Reticulating splines...
	I0729 17:51:37.558210  105708 client.go:171] duration metric: took 23.767149838s to LocalClient.Create
	I0729 17:51:37.558240  105708 start.go:167] duration metric: took 23.767212309s to libmachine.API.Create "ha-794405"
	I0729 17:51:37.558258  105708 start.go:293] postStartSetup for "ha-794405-m03" (driver="kvm2")
	I0729 17:51:37.558273  105708 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 17:51:37.558293  105708 main.go:141] libmachine: (ha-794405-m03) Calling .DriverName
	I0729 17:51:37.558577  105708 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 17:51:37.558609  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHHostname
	I0729 17:51:37.561019  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:37.561387  105708 main.go:141] libmachine: (ha-794405-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:a7:17", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:51:28 +0000 UTC Type:0 Mac:52:54:00:6d:a7:17 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-794405-m03 Clientid:01:52:54:00:6d:a7:17}
	I0729 17:51:37.561414  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined IP address 192.168.39.185 and MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:37.561589  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHPort
	I0729 17:51:37.561756  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHKeyPath
	I0729 17:51:37.561897  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHUsername
	I0729 17:51:37.562016  105708 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m03/id_rsa Username:docker}
	I0729 17:51:37.642877  105708 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 17:51:37.646900  105708 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 17:51:37.646923  105708 filesync.go:126] Scanning /home/jenkins/minikube-integration/19339-88081/.minikube/addons for local assets ...
	I0729 17:51:37.646990  105708 filesync.go:126] Scanning /home/jenkins/minikube-integration/19339-88081/.minikube/files for local assets ...
	I0729 17:51:37.647083  105708 filesync.go:149] local asset: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem -> 952822.pem in /etc/ssl/certs
	I0729 17:51:37.647094  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem -> /etc/ssl/certs/952822.pem
	I0729 17:51:37.647196  105708 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 17:51:37.656067  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem --> /etc/ssl/certs/952822.pem (1708 bytes)
	I0729 17:51:37.681469  105708 start.go:296] duration metric: took 123.19384ms for postStartSetup
	I0729 17:51:37.681525  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetConfigRaw
	I0729 17:51:37.682212  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetIP
	I0729 17:51:37.685029  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:37.685398  105708 main.go:141] libmachine: (ha-794405-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:a7:17", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:51:28 +0000 UTC Type:0 Mac:52:54:00:6d:a7:17 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-794405-m03 Clientid:01:52:54:00:6d:a7:17}
	I0729 17:51:37.685419  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined IP address 192.168.39.185 and MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:37.685709  105708 profile.go:143] Saving config to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/config.json ...
	I0729 17:51:37.685928  105708 start.go:128] duration metric: took 23.914423367s to createHost
	I0729 17:51:37.685951  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHHostname
	I0729 17:51:37.688346  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:37.688655  105708 main.go:141] libmachine: (ha-794405-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:a7:17", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:51:28 +0000 UTC Type:0 Mac:52:54:00:6d:a7:17 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-794405-m03 Clientid:01:52:54:00:6d:a7:17}
	I0729 17:51:37.688684  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined IP address 192.168.39.185 and MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:37.688812  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHPort
	I0729 17:51:37.688991  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHKeyPath
	I0729 17:51:37.689106  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHKeyPath
	I0729 17:51:37.689289  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHUsername
	I0729 17:51:37.689463  105708 main.go:141] libmachine: Using SSH client type: native
	I0729 17:51:37.689659  105708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0729 17:51:37.689669  105708 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 17:51:37.793585  105708 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722275497.769611233
	
	I0729 17:51:37.793609  105708 fix.go:216] guest clock: 1722275497.769611233
	I0729 17:51:37.793619  105708 fix.go:229] Guest: 2024-07-29 17:51:37.769611233 +0000 UTC Remote: 2024-07-29 17:51:37.685940461 +0000 UTC m=+154.895501561 (delta=83.670772ms)
	I0729 17:51:37.793642  105708 fix.go:200] guest clock delta is within tolerance: 83.670772ms
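	(annotation) The fix.go lines above compare the guest wall clock (read over SSH with date) against the host clock and skip any adjustment when the delta is within tolerance. A small Go sketch of that comparison using the timestamps from the log; the 2s tolerance is an assumed example value, not minikube's setting:

	    package main

	    import (
	    	"fmt"
	    	"time"
	    )

	    // clockDeltaWithinTolerance reports whether the guest clock is close
	    // enough to the host clock that no adjustment is needed, as in the
	    // "guest clock delta is within tolerance" log line above.
	    func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	    	delta := guest.Sub(host)
	    	if delta < 0 {
	    		delta = -delta
	    	}
	    	return delta, delta <= tolerance
	    }

	    func main() {
	    	// Guest and host timestamps as reported in the log (sec, nsec).
	    	guest := time.Unix(1722275497, 769611233)
	    	host := time.Unix(1722275497, 685940461)
	    	delta, ok := clockDeltaWithinTolerance(guest, host, 2*time.Second)
	    	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
	    }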
	I0729 17:51:37.793650  105708 start.go:83] releasing machines lock for "ha-794405-m03", held for 24.022296869s
	I0729 17:51:37.793674  105708 main.go:141] libmachine: (ha-794405-m03) Calling .DriverName
	I0729 17:51:37.793974  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetIP
	I0729 17:51:37.796625  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:37.797098  105708 main.go:141] libmachine: (ha-794405-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:a7:17", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:51:28 +0000 UTC Type:0 Mac:52:54:00:6d:a7:17 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-794405-m03 Clientid:01:52:54:00:6d:a7:17}
	I0729 17:51:37.797127  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined IP address 192.168.39.185 and MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:37.799788  105708 out.go:177] * Found network options:
	I0729 17:51:37.801153  105708 out.go:177]   - NO_PROXY=192.168.39.102,192.168.39.62
	W0729 17:51:37.802278  105708 proxy.go:119] fail to check proxy env: Error ip not in block
	W0729 17:51:37.802299  105708 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 17:51:37.802315  105708 main.go:141] libmachine: (ha-794405-m03) Calling .DriverName
	I0729 17:51:37.802912  105708 main.go:141] libmachine: (ha-794405-m03) Calling .DriverName
	I0729 17:51:37.803108  105708 main.go:141] libmachine: (ha-794405-m03) Calling .DriverName
	I0729 17:51:37.803214  105708 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 17:51:37.803250  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHHostname
	W0729 17:51:37.803324  105708 proxy.go:119] fail to check proxy env: Error ip not in block
	W0729 17:51:37.803346  105708 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 17:51:37.803414  105708 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 17:51:37.803431  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHHostname
	I0729 17:51:37.806156  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:37.806537  105708 main.go:141] libmachine: (ha-794405-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:a7:17", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:51:28 +0000 UTC Type:0 Mac:52:54:00:6d:a7:17 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-794405-m03 Clientid:01:52:54:00:6d:a7:17}
	I0729 17:51:37.806561  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined IP address 192.168.39.185 and MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:37.806581  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:37.806722  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHPort
	I0729 17:51:37.806896  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHKeyPath
	I0729 17:51:37.807016  105708 main.go:141] libmachine: (ha-794405-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:a7:17", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:51:28 +0000 UTC Type:0 Mac:52:54:00:6d:a7:17 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-794405-m03 Clientid:01:52:54:00:6d:a7:17}
	I0729 17:51:37.807041  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHUsername
	I0729 17:51:37.807048  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined IP address 192.168.39.185 and MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:37.807187  105708 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m03/id_rsa Username:docker}
	I0729 17:51:37.807226  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHPort
	I0729 17:51:37.807385  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHKeyPath
	I0729 17:51:37.807524  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHUsername
	I0729 17:51:37.807688  105708 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m03/id_rsa Username:docker}
	I0729 17:51:38.034803  105708 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 17:51:38.041903  105708 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 17:51:38.041984  105708 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 17:51:38.060208  105708 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 17:51:38.060235  105708 start.go:495] detecting cgroup driver to use...
	I0729 17:51:38.060294  105708 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 17:51:38.076360  105708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 17:51:38.089724  105708 docker.go:217] disabling cri-docker service (if available) ...
	I0729 17:51:38.089783  105708 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 17:51:38.102853  105708 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 17:51:38.116385  105708 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 17:51:38.229756  105708 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 17:51:38.404745  105708 docker.go:233] disabling docker service ...
	I0729 17:51:38.404834  105708 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 17:51:38.419584  105708 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 17:51:38.433372  105708 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 17:51:38.544792  105708 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 17:51:38.653054  105708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 17:51:38.667071  105708 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 17:51:38.687105  105708 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 17:51:38.687173  105708 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:51:38.699331  105708 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 17:51:38.699397  105708 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:51:38.711428  105708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:51:38.722969  105708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:51:38.734580  105708 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 17:51:38.746232  105708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:51:38.757995  105708 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:51:38.776224  105708 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 17:51:38.788146  105708 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 17:51:38.798705  105708 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 17:51:38.798757  105708 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 17:51:38.811479  105708 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
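	(annotation) The sequence above is a fallback: when the bridge netfilter sysctl cannot be read, the br_netfilter module is loaded and IPv4 forwarding is switched on before CRI-O is restarted. A hedged Go sketch of that order of operations, running the commands locally for illustration (minikube runs the equivalents on the guest over SSH; this is not its ssh_runner code):

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    )

	    // run executes a shell command and returns an error with its output
	    // attached; a simplified stand-in for a remote command runner.
	    func run(cmdline string) error {
	    	out, err := exec.Command("sh", "-c", cmdline).CombinedOutput()
	    	if err != nil {
	    		return fmt.Errorf("%s: %w (%s)", cmdline, err, out)
	    	}
	    	return nil
	    }

	    func main() {
	    	// If the bridge netfilter sysctl is missing, the kernel module is
	    	// probably not loaded yet, so load it before enabling forwarding.
	    	if err := run("sudo sysctl net.bridge.bridge-nf-call-iptables"); err != nil {
	    		fmt.Println("sysctl check failed, loading br_netfilter:", err)
	    		if err := run("sudo modprobe br_netfilter"); err != nil {
	    			fmt.Println("modprobe failed:", err)
	    		}
	    	}
	    	if err := run(`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`); err != nil {
	    		fmt.Println("enabling ip_forward failed:", err)
	    	}
	    }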
	I0729 17:51:38.820984  105708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 17:51:38.941667  105708 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 17:51:39.085748  105708 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 17:51:39.085852  105708 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 17:51:39.091014  105708 start.go:563] Will wait 60s for crictl version
	I0729 17:51:39.091076  105708 ssh_runner.go:195] Run: which crictl
	I0729 17:51:39.095007  105708 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 17:51:39.139907  105708 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 17:51:39.139989  105708 ssh_runner.go:195] Run: crio --version
	I0729 17:51:39.168090  105708 ssh_runner.go:195] Run: crio --version
	I0729 17:51:39.200299  105708 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 17:51:39.201714  105708 out.go:177]   - env NO_PROXY=192.168.39.102
	I0729 17:51:39.202982  105708 out.go:177]   - env NO_PROXY=192.168.39.102,192.168.39.62
	I0729 17:51:39.204237  105708 main.go:141] libmachine: (ha-794405-m03) Calling .GetIP
	I0729 17:51:39.207379  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:39.207858  105708 main.go:141] libmachine: (ha-794405-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:a7:17", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:51:28 +0000 UTC Type:0 Mac:52:54:00:6d:a7:17 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-794405-m03 Clientid:01:52:54:00:6d:a7:17}
	I0729 17:51:39.207897  105708 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined IP address 192.168.39.185 and MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:51:39.208155  105708 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 17:51:39.213137  105708 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 17:51:39.225413  105708 mustload.go:65] Loading cluster: ha-794405
	I0729 17:51:39.225634  105708 config.go:182] Loaded profile config "ha-794405": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:51:39.225892  105708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:51:39.225934  105708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:51:39.241561  105708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43309
	I0729 17:51:39.241969  105708 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:51:39.242481  105708 main.go:141] libmachine: Using API Version  1
	I0729 17:51:39.242502  105708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:51:39.242835  105708 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:51:39.243022  105708 main.go:141] libmachine: (ha-794405) Calling .GetState
	I0729 17:51:39.244548  105708 host.go:66] Checking if "ha-794405" exists ...
	I0729 17:51:39.244834  105708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:51:39.244891  105708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:51:39.259364  105708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32969
	I0729 17:51:39.259878  105708 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:51:39.260357  105708 main.go:141] libmachine: Using API Version  1
	I0729 17:51:39.260378  105708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:51:39.260707  105708 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:51:39.260915  105708 main.go:141] libmachine: (ha-794405) Calling .DriverName
	I0729 17:51:39.261071  105708 certs.go:68] Setting up /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405 for IP: 192.168.39.185
	I0729 17:51:39.261084  105708 certs.go:194] generating shared ca certs ...
	I0729 17:51:39.261101  105708 certs.go:226] acquiring lock for ca certs: {Name:mk20106d58b3f22ea0fc0f4b499fa3fa572a2690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:51:39.261221  105708 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key
	I0729 17:51:39.261269  105708 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key
	I0729 17:51:39.261282  105708 certs.go:256] generating profile certs ...
	I0729 17:51:39.261387  105708 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/client.key
	I0729 17:51:39.261418  105708 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.key.0f64bc4c
	I0729 17:51:39.261438  105708 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.crt.0f64bc4c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.102 192.168.39.62 192.168.39.185 192.168.39.254]
	I0729 17:51:39.370954  105708 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.crt.0f64bc4c ...
	I0729 17:51:39.370983  105708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.crt.0f64bc4c: {Name:mk9ad2699a6f08d6feea0804a30182c285b135b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:51:39.371165  105708 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.key.0f64bc4c ...
	I0729 17:51:39.371181  105708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.key.0f64bc4c: {Name:mk1edda8ff2e7a1dff1452cad9bc647746822586 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:51:39.371289  105708 certs.go:381] copying /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.crt.0f64bc4c -> /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.crt
	I0729 17:51:39.371449  105708 certs.go:385] copying /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.key.0f64bc4c -> /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.key
	I0729 17:51:39.371619  105708 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/proxy-client.key
	I0729 17:51:39.371640  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 17:51:39.371658  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 17:51:39.371678  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 17:51:39.371695  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 17:51:39.371712  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 17:51:39.371727  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 17:51:39.371743  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 17:51:39.371761  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 17:51:39.371827  105708 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282.pem (1338 bytes)
	W0729 17:51:39.371868  105708 certs.go:480] ignoring /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282_empty.pem, impossibly tiny 0 bytes
	I0729 17:51:39.371881  105708 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 17:51:39.371917  105708 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem (1078 bytes)
	I0729 17:51:39.371948  105708 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem (1123 bytes)
	I0729 17:51:39.371988  105708 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem (1679 bytes)
	I0729 17:51:39.372044  105708 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem (1708 bytes)
	I0729 17:51:39.372082  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282.pem -> /usr/share/ca-certificates/95282.pem
	I0729 17:51:39.372108  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem -> /usr/share/ca-certificates/952822.pem
	I0729 17:51:39.372123  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:51:39.372165  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHHostname
	I0729 17:51:39.375170  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:51:39.375646  105708 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:51:39.375674  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:51:39.375915  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHPort
	I0729 17:51:39.376114  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:51:39.376271  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHUsername
	I0729 17:51:39.376402  105708 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405/id_rsa Username:docker}
	I0729 17:51:39.449248  105708 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0729 17:51:39.454254  105708 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0729 17:51:39.465664  105708 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0729 17:51:39.469745  105708 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0729 17:51:39.482969  105708 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0729 17:51:39.487408  105708 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0729 17:51:39.500935  105708 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0729 17:51:39.505908  105708 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0729 17:51:39.516676  105708 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0729 17:51:39.520797  105708 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0729 17:51:39.530928  105708 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0729 17:51:39.535723  105708 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0729 17:51:39.546854  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 17:51:39.575157  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 17:51:39.602960  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 17:51:39.627624  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 17:51:39.654674  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0729 17:51:39.681302  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 17:51:39.706741  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 17:51:39.730706  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 17:51:39.753580  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282.pem --> /usr/share/ca-certificates/95282.pem (1338 bytes)
	I0729 17:51:39.779188  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem --> /usr/share/ca-certificates/952822.pem (1708 bytes)
	I0729 17:51:39.805025  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 17:51:39.830566  105708 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0729 17:51:39.848010  105708 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0729 17:51:39.865383  105708 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0729 17:51:39.882453  105708 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0729 17:51:39.898993  105708 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0729 17:51:39.914624  105708 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0729 17:51:39.930487  105708 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0729 17:51:39.946333  105708 ssh_runner.go:195] Run: openssl version
	I0729 17:51:39.951926  105708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/952822.pem && ln -fs /usr/share/ca-certificates/952822.pem /etc/ssl/certs/952822.pem"
	I0729 17:51:39.962653  105708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/952822.pem
	I0729 17:51:39.967172  105708 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:45 /usr/share/ca-certificates/952822.pem
	I0729 17:51:39.967217  105708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/952822.pem
	I0729 17:51:39.973243  105708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/952822.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 17:51:39.985022  105708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 17:51:39.995057  105708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:51:39.999521  105708 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:51:39.999576  105708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 17:51:40.005332  105708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 17:51:40.015845  105708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/95282.pem && ln -fs /usr/share/ca-certificates/95282.pem /etc/ssl/certs/95282.pem"
	I0729 17:51:40.025936  105708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/95282.pem
	I0729 17:51:40.030310  105708 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:45 /usr/share/ca-certificates/95282.pem
	I0729 17:51:40.030361  105708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/95282.pem
	I0729 17:51:40.036076  105708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/95282.pem /etc/ssl/certs/51391683.0"
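
The three command pairs above register each CA certificate by hashing it (openssl x509 -hash) and symlinking /etc/ssl/certs/<hash>.0 back to the PEM file, which is how OpenSSL locates trusted certificates at verification time. A minimal Go sketch of that step, assuming openssl is on PATH and using illustrative paths rather than minikube's SSH runner:

// Sketch of the hash-and-symlink step used to register a CA certificate with
// OpenSSL's certificate directory. Paths are illustrative, not minikube's code.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert computes `openssl x509 -hash -noout -in pemPath` and creates
// the <hash>.0 symlink in certsDir pointing back at the PEM file.
func installCACert(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
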
	I0729 17:51:40.047264  105708 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 17:51:40.051418  105708 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 17:51:40.051478  105708 kubeadm.go:934] updating node {m03 192.168.39.185 8443 v1.30.3 crio true true} ...
	I0729 17:51:40.051600  105708 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-794405-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.185
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-794405 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 17:51:40.051637  105708 kube-vip.go:115] generating kube-vip config ...
	I0729 17:51:40.051681  105708 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 17:51:40.067051  105708 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 17:51:40.067116  105708 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
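
The manifest above is rendered from a template and later copied to /etc/kubernetes/manifests/kube-vip.yaml as a static pod. A minimal sketch of that templating step, with illustrative struct fields and a trimmed template rather than minikube's actual kube-vip.go types:

// Sketch of rendering a kube-vip static-pod manifest from a template.
// Field names and the template snippet are assumptions for illustration.
package main

import (
	"os"
	"text/template"
)

type kubeVipConfig struct {
	VIP       string // virtual IP advertised by kube-vip (e.g. 192.168.39.254)
	Port      string // API server port
	Interface string // interface the VIP is bound to
	Image     string // kube-vip image tag
	LBEnable  bool   // control-plane load-balancing toggle
}

const manifestTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - args: ["manager"]
    env:
    - name: address
      value: {{ .VIP }}
    - name: port
      value: "{{ .Port }}"
    - name: vip_interface
      value: {{ .Interface }}
    - name: lb_enable
      value: "{{ .LBEnable }}"
    image: {{ .Image }}
    name: kube-vip
  hostNetwork: true
`

func main() {
	cfg := kubeVipConfig{
		VIP:       "192.168.39.254",
		Port:      "8443",
		Interface: "eth0",
		Image:     "ghcr.io/kube-vip/kube-vip:v0.8.0",
		LBEnable:  true,
	}
	t := template.Must(template.New("kube-vip").Parse(manifestTmpl))
	// minikube copies the rendered bytes to the node over SSH; here we just
	// write them to stdout.
	if err := t.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}
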
	I0729 17:51:40.067181  105708 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 17:51:40.077259  105708 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0729 17:51:40.077323  105708 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0729 17:51:40.087388  105708 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0729 17:51:40.087413  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 17:51:40.087455  105708 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0729 17:51:40.087489  105708 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 17:51:40.087496  105708 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0729 17:51:40.087531  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 17:51:40.087506  105708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:51:40.087616  105708 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 17:51:40.092281  105708 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0729 17:51:40.092305  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0729 17:51:40.131874  105708 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 17:51:40.131903  105708 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0729 17:51:40.131927  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0729 17:51:40.131977  105708 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 17:51:40.184392  105708 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0729 17:51:40.184448  105708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
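
The kubectl/kubeadm/kubelet transfers above follow one pattern: stat the target path and copy the cached binary only when the stat fails. A local-filesystem sketch of that check (minikube performs the copy over SSH; the paths here are illustrative):

// Sketch of the "check existence, then transfer" pattern used for the
// kubelet/kubeadm/kubectl binaries.
package main

import (
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// ensureBinary copies src to dst only if dst does not already exist.
func ensureBinary(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		return nil // already present, nothing to do
	}
	if err := os.MkdirAll(filepath.Dir(dst), 0o755); err != nil {
		return err
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o755)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	cache := "/home/jenkins/.minikube/cache/linux/amd64/v1.30.3" // assumed cache dir
	target := "/var/lib/minikube/binaries/v1.30.3"               // target dir on the node
	for _, bin := range []string{"kubectl", "kubeadm", "kubelet"} {
		if err := ensureBinary(filepath.Join(cache, bin), filepath.Join(target, bin)); err != nil {
			fmt.Fprintln(os.Stderr, "transfer failed:", err)
		}
	}
}
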
	I0729 17:51:41.009843  105708 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0729 17:51:41.019819  105708 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0729 17:51:41.036516  105708 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 17:51:41.053300  105708 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0729 17:51:41.070512  105708 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 17:51:41.075014  105708 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
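
The one-liner above rewrites /etc/hosts so that control-plane.minikube.internal resolves to the HA virtual IP: any stale line for the name is dropped and the expected mapping appended. A rough Go equivalent, written against a scratch file path so it can be tried safely:

// Sketch of the "ensure /etc/hosts has a control-plane entry" step shown in
// the log: drop stale lines for the hostname, append the expected mapping.
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// keep every line except an existing mapping for this hostname
		if line != "" && !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	// scratch file instead of the real /etc/hosts
	if err := ensureHostsEntry("hosts.test", "192.168.39.254", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
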
	I0729 17:51:41.088092  105708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 17:51:41.226113  105708 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 17:51:41.245974  105708 host.go:66] Checking if "ha-794405" exists ...
	I0729 17:51:41.246427  105708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:51:41.246487  105708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:51:41.262609  105708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42093
	I0729 17:51:41.263056  105708 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:51:41.263676  105708 main.go:141] libmachine: Using API Version  1
	I0729 17:51:41.263704  105708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:51:41.264057  105708 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:51:41.264285  105708 main.go:141] libmachine: (ha-794405) Calling .DriverName
	I0729 17:51:41.264449  105708 start.go:317] joinCluster: &{Name:ha-794405 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-794405 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.62 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.185 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:51:41.264625  105708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0729 17:51:41.264651  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHHostname
	I0729 17:51:41.267557  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:51:41.268013  105708 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:51:41.268047  105708 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:51:41.268162  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHPort
	I0729 17:51:41.268342  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:51:41.268472  105708 main.go:141] libmachine: (ha-794405) Calling .GetSSHUsername
	I0729 17:51:41.268607  105708 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405/id_rsa Username:docker}
	I0729 17:51:41.440958  105708 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.185 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 17:51:41.441015  105708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token t2ykit.l2mn21qacn94oqux --discovery-token-ca-cert-hash sha256:13f0bc092dc09b7291dec830fc09c94db5ac1707dd26df6091df80009af3af7f --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-794405-m03 --control-plane --apiserver-advertise-address=192.168.39.185 --apiserver-bind-port=8443"
	I0729 17:52:05.729150  105708 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token t2ykit.l2mn21qacn94oqux --discovery-token-ca-cert-hash sha256:13f0bc092dc09b7291dec830fc09c94db5ac1707dd26df6091df80009af3af7f --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-794405-m03 --control-plane --apiserver-advertise-address=192.168.39.185 --apiserver-bind-port=8443": (24.288102608s)
	I0729 17:52:05.729199  105708 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0729 17:52:06.400473  105708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-794405-m03 minikube.k8s.io/updated_at=2024_07_29T17_52_06_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=de53afae5e8f4269438099f4dad14d93a8a17e35 minikube.k8s.io/name=ha-794405 minikube.k8s.io/primary=false
	I0729 17:52:06.547141  105708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-794405-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0729 17:52:06.684118  105708 start.go:319] duration metric: took 25.41966317s to joinCluster
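
The join above is driven by the output of `kubeadm token create --print-join-command`: the token and CA-certificate hash are spliced into a `kubeadm join` invocation with control-plane flags for the new node. A sketch of assembling that command line, with placeholder credentials rather than real values:

// Sketch of building the kubeadm join command for an additional control-plane
// node. Token and hash below are placeholders, not real credentials.
package main

import (
	"fmt"
	"strings"
)

func joinCommand(endpoint, token, caHash, nodeName, advertiseIP string) string {
	args := []string{
		"kubeadm", "join", endpoint,
		"--token", token,
		"--discovery-token-ca-cert-hash", "sha256:" + caHash,
		"--control-plane",
		"--node-name", nodeName,
		"--apiserver-advertise-address", advertiseIP,
		"--apiserver-bind-port", "8443",
		"--cri-socket", "unix:///var/run/crio/crio.sock",
	}
	return strings.Join(args, " ")
}

func main() {
	cmd := joinCommand(
		"control-plane.minikube.internal:8443",
		"abcdef.0123456789abcdef", // placeholder token
		strings.Repeat("0", 64),   // placeholder CA cert hash
		"ha-794405-m03",
		"192.168.39.185",
	)
	fmt.Println(cmd)
}
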
	I0729 17:52:06.684219  105708 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.185 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 17:52:06.684723  105708 config.go:182] Loaded profile config "ha-794405": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:52:06.685937  105708 out.go:177] * Verifying Kubernetes components...
	I0729 17:52:06.687299  105708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 17:52:07.001516  105708 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 17:52:07.092644  105708 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19339-88081/kubeconfig
	I0729 17:52:07.092905  105708 kapi.go:59] client config for ha-794405: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/client.crt", KeyFile:"/home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/client.key", CAFile:"/home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0729 17:52:07.092977  105708 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.102:8443
	I0729 17:52:07.093351  105708 node_ready.go:35] waiting up to 6m0s for node "ha-794405-m03" to be "Ready" ...
	I0729 17:52:07.093460  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:07.093471  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:07.093481  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:07.093488  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:07.096691  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:07.593951  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:07.593975  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:07.593983  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:07.593987  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:07.597596  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:08.094137  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:08.094163  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:08.094174  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:08.094181  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:08.098001  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:08.594166  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:08.594193  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:08.594205  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:08.594210  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:08.597318  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:09.093727  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:09.093752  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:09.093758  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:09.093761  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:09.096800  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:09.097526  105708 node_ready.go:53] node "ha-794405-m03" has status "Ready":"False"
	I0729 17:52:09.593931  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:09.593951  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:09.593959  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:09.593964  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:09.598145  105708 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 17:52:10.093753  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:10.093779  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:10.093791  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:10.093801  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:10.098019  105708 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 17:52:10.594395  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:10.594423  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:10.594434  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:10.594440  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:10.598134  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:11.094379  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:11.094407  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:11.094419  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:11.094425  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:11.098039  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:11.098742  105708 node_ready.go:53] node "ha-794405-m03" has status "Ready":"False"
	I0729 17:52:11.594240  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:11.594271  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:11.594283  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:11.594291  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:11.597458  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:12.093653  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:12.093679  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:12.093689  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:12.093693  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:12.097391  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:12.593808  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:12.593835  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:12.593844  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:12.593848  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:12.597483  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:13.094127  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:13.094149  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:13.094156  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:13.094161  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:13.097539  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:13.594152  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:13.594180  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:13.594193  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:13.594197  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:13.600588  105708 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0729 17:52:13.601209  105708 node_ready.go:53] node "ha-794405-m03" has status "Ready":"False"
	I0729 17:52:14.093641  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:14.093663  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:14.093671  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:14.093680  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:14.096907  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:14.593508  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:14.593533  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:14.593543  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:14.593548  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:14.596723  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:15.093697  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:15.093720  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:15.093728  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:15.093732  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:15.097273  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:15.593620  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:15.593651  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:15.593663  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:15.593668  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:15.596952  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:16.093834  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:16.093858  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:16.093866  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:16.093870  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:16.097198  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:16.098052  105708 node_ready.go:53] node "ha-794405-m03" has status "Ready":"False"
	I0729 17:52:16.593735  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:16.593758  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:16.593767  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:16.593772  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:16.596889  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:17.094160  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:17.094186  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:17.094197  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:17.094204  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:17.097538  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:17.594488  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:17.594515  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:17.594523  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:17.594526  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:17.597661  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:18.094116  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:18.094141  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:18.094151  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:18.094156  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:18.097888  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:18.098539  105708 node_ready.go:53] node "ha-794405-m03" has status "Ready":"False"
	I0729 17:52:18.593933  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:18.593958  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:18.593971  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:18.593975  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:18.597907  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:19.094256  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:19.094288  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:19.094301  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:19.094306  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:19.098574  105708 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 17:52:19.594100  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:19.594122  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:19.594130  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:19.594135  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:19.597121  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:52:20.094163  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:20.094185  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:20.094193  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:20.094199  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:20.099057  105708 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 17:52:20.099921  105708 node_ready.go:53] node "ha-794405-m03" has status "Ready":"False"
	I0729 17:52:20.594118  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:20.594140  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:20.594149  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:20.594154  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:20.597180  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:21.094340  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:21.094365  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:21.094374  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:21.094378  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:21.097640  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:21.594113  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:21.594136  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:21.594144  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:21.594147  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:21.597402  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:22.094481  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:22.094508  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:22.094518  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:22.094522  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:22.107733  105708 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0729 17:52:22.108412  105708 node_ready.go:49] node "ha-794405-m03" has status "Ready":"True"
	I0729 17:52:22.108441  105708 node_ready.go:38] duration metric: took 15.015062151s for node "ha-794405-m03" to be "Ready" ...
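
The repeated GET /api/v1/nodes/ha-794405-m03 requests above are a roughly 500ms poll of the node's Ready condition until it reports True. A sketch of the same poll using client-go (assumed as a dependency), with an illustrative kubeconfig path and timeout:

// Sketch of polling a node's Ready condition, the loop behind the repeated
// GET /api/v1/nodes/<name> requests in the log. Paths and timeouts are
// assumptions for illustration.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the NodeReady condition is True.
func nodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		node, err := client.CoreV1().Nodes().Get(ctx, "ha-794405-m03", metav1.GetOptions{})
		if err == nil && nodeReady(node) {
			fmt.Println("node is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for node")
			return
		case <-time.After(500 * time.Millisecond):
		}
	}
}
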
	I0729 17:52:22.108452  105708 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 17:52:22.108533  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I0729 17:52:22.108546  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:22.108556  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:22.108560  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:22.115703  105708 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0729 17:52:22.122388  105708 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-bb2jg" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:22.122477  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-bb2jg
	I0729 17:52:22.122486  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:22.122494  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:22.122497  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:22.125882  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:22.126777  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405
	I0729 17:52:22.126791  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:22.126798  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:22.126801  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:22.129232  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:52:22.129664  105708 pod_ready.go:92] pod "coredns-7db6d8ff4d-bb2jg" in "kube-system" namespace has status "Ready":"True"
	I0729 17:52:22.129681  105708 pod_ready.go:81] duration metric: took 7.267572ms for pod "coredns-7db6d8ff4d-bb2jg" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:22.129689  105708 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-nzvff" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:22.129737  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nzvff
	I0729 17:52:22.129744  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:22.129751  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:22.129756  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:22.133407  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:22.134013  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405
	I0729 17:52:22.134030  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:22.134037  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:22.134043  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:22.136873  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:52:22.137286  105708 pod_ready.go:92] pod "coredns-7db6d8ff4d-nzvff" in "kube-system" namespace has status "Ready":"True"
	I0729 17:52:22.137305  105708 pod_ready.go:81] duration metric: took 7.608491ms for pod "coredns-7db6d8ff4d-nzvff" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:22.137316  105708 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-794405" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:22.137369  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-794405
	I0729 17:52:22.137379  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:22.137389  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:22.137395  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:22.140251  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:52:22.141219  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405
	I0729 17:52:22.141232  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:22.141238  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:22.141244  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:22.144019  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:52:22.144818  105708 pod_ready.go:92] pod "etcd-ha-794405" in "kube-system" namespace has status "Ready":"True"
	I0729 17:52:22.144833  105708 pod_ready.go:81] duration metric: took 7.510577ms for pod "etcd-ha-794405" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:22.144840  105708 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-794405-m02" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:22.144907  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-794405-m02
	I0729 17:52:22.144917  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:22.144923  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:22.144927  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:22.147860  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:52:22.148905  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:52:22.148921  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:22.148931  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:22.148938  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:22.150970  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:52:22.151405  105708 pod_ready.go:92] pod "etcd-ha-794405-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 17:52:22.151423  105708 pod_ready.go:81] duration metric: took 6.576669ms for pod "etcd-ha-794405-m02" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:22.151434  105708 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-794405-m03" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:22.294790  105708 request.go:629] Waited for 143.290566ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-794405-m03
	I0729 17:52:22.294876  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-794405-m03
	I0729 17:52:22.294887  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:22.294898  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:22.294907  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:22.297667  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:52:22.494604  105708 request.go:629] Waited for 196.288993ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:22.494664  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:22.494669  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:22.494677  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:22.494682  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:22.498015  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:22.498640  105708 pod_ready.go:92] pod "etcd-ha-794405-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 17:52:22.498662  105708 pod_ready.go:81] duration metric: took 347.221622ms for pod "etcd-ha-794405-m03" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:22.498685  105708 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-794405" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:22.694620  105708 request.go:629] Waited for 195.855925ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-794405
	I0729 17:52:22.694692  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-794405
	I0729 17:52:22.694697  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:22.694704  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:22.694710  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:22.697741  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:22.894865  105708 request.go:629] Waited for 196.229078ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-794405
	I0729 17:52:22.894930  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405
	I0729 17:52:22.894936  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:22.894948  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:22.894955  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:22.898028  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:22.898788  105708 pod_ready.go:92] pod "kube-apiserver-ha-794405" in "kube-system" namespace has status "Ready":"True"
	I0729 17:52:22.898807  105708 pod_ready.go:81] duration metric: took 400.109837ms for pod "kube-apiserver-ha-794405" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:22.898827  105708 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-794405-m02" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:23.095419  105708 request.go:629] Waited for 196.501ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-794405-m02
	I0729 17:52:23.095642  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-794405-m02
	I0729 17:52:23.095669  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:23.095681  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:23.095693  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:23.098878  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:23.294916  105708 request.go:629] Waited for 195.278918ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:52:23.294979  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:52:23.294987  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:23.294996  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:23.295002  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:23.298687  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:23.299396  105708 pod_ready.go:92] pod "kube-apiserver-ha-794405-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 17:52:23.299426  105708 pod_ready.go:81] duration metric: took 400.589256ms for pod "kube-apiserver-ha-794405-m02" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:23.299439  105708 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-794405-m03" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:23.495317  105708 request.go:629] Waited for 195.767589ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-794405-m03
	I0729 17:52:23.495395  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-794405-m03
	I0729 17:52:23.495405  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:23.495417  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:23.495425  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:23.499174  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:23.694651  105708 request.go:629] Waited for 193.281404ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:23.694722  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:23.694727  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:23.694735  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:23.694740  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:23.698483  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:23.699565  105708 pod_ready.go:92] pod "kube-apiserver-ha-794405-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 17:52:23.699585  105708 pod_ready.go:81] duration metric: took 400.13736ms for pod "kube-apiserver-ha-794405-m03" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:23.699601  105708 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-794405" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:23.895283  105708 request.go:629] Waited for 195.596381ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-794405
	I0729 17:52:23.895360  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-794405
	I0729 17:52:23.895366  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:23.895374  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:23.895378  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:23.898525  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:24.094774  105708 request.go:629] Waited for 195.35988ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-794405
	I0729 17:52:24.094846  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405
	I0729 17:52:24.094855  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:24.094865  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:24.094876  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:24.097820  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:52:24.098509  105708 pod_ready.go:92] pod "kube-controller-manager-ha-794405" in "kube-system" namespace has status "Ready":"True"
	I0729 17:52:24.098528  105708 pod_ready.go:81] duration metric: took 398.913833ms for pod "kube-controller-manager-ha-794405" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:24.098538  105708 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-794405-m02" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:24.294502  105708 request.go:629] Waited for 195.889611ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-794405-m02
	I0729 17:52:24.294562  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-794405-m02
	I0729 17:52:24.294567  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:24.294574  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:24.294582  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:24.297602  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:52:24.494783  105708 request.go:629] Waited for 196.364553ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:52:24.494844  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:52:24.494849  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:24.494857  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:24.494862  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:24.498051  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:24.498652  105708 pod_ready.go:92] pod "kube-controller-manager-ha-794405-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 17:52:24.498678  105708 pod_ready.go:81] duration metric: took 400.133287ms for pod "kube-controller-manager-ha-794405-m02" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:24.498694  105708 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-794405-m03" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:24.694575  105708 request.go:629] Waited for 195.792594ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-794405-m03
	I0729 17:52:24.694669  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-794405-m03
	I0729 17:52:24.694678  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:24.694689  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:24.694698  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:24.698084  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:24.895177  105708 request.go:629] Waited for 196.401878ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:24.895252  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:24.895263  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:24.895301  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:24.895310  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:24.898701  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:24.899355  105708 pod_ready.go:92] pod "kube-controller-manager-ha-794405-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 17:52:24.899374  105708 pod_ready.go:81] duration metric: took 400.671302ms for pod "kube-controller-manager-ha-794405-m03" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:24.899383  105708 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-llkz8" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:25.095483  105708 request.go:629] Waited for 196.033676ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-llkz8
	I0729 17:52:25.095585  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-llkz8
	I0729 17:52:25.095596  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:25.095607  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:25.095613  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:25.098769  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:25.294943  105708 request.go:629] Waited for 195.360516ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-794405
	I0729 17:52:25.295029  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405
	I0729 17:52:25.295034  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:25.295042  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:25.295049  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:25.297909  105708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 17:52:25.298495  105708 pod_ready.go:92] pod "kube-proxy-llkz8" in "kube-system" namespace has status "Ready":"True"
	I0729 17:52:25.298518  105708 pod_ready.go:81] duration metric: took 399.128803ms for pod "kube-proxy-llkz8" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:25.298527  105708 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ndmlm" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:25.494555  105708 request.go:629] Waited for 195.94168ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ndmlm
	I0729 17:52:25.494659  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ndmlm
	I0729 17:52:25.494666  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:25.494674  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:25.494678  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:25.498225  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:25.695461  105708 request.go:629] Waited for 196.323528ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:25.695517  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:25.695521  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:25.695529  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:25.695534  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:25.698829  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:25.699491  105708 pod_ready.go:92] pod "kube-proxy-ndmlm" in "kube-system" namespace has status "Ready":"True"
	I0729 17:52:25.699512  105708 pod_ready.go:81] duration metric: took 400.977802ms for pod "kube-proxy-ndmlm" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:25.699524  105708 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qcmxl" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:25.894477  105708 request.go:629] Waited for 194.854751ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qcmxl
	I0729 17:52:25.894569  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qcmxl
	I0729 17:52:25.894612  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:25.894623  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:25.894629  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:25.898150  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:26.095284  105708 request.go:629] Waited for 196.396948ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:52:26.095358  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:52:26.095366  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:26.095377  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:26.095388  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:26.098864  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:26.099520  105708 pod_ready.go:92] pod "kube-proxy-qcmxl" in "kube-system" namespace has status "Ready":"True"
	I0729 17:52:26.099556  105708 pod_ready.go:81] duration metric: took 400.017239ms for pod "kube-proxy-qcmxl" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:26.099565  105708 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-794405" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:26.295195  105708 request.go:629] Waited for 195.560076ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-794405
	I0729 17:52:26.295273  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-794405
	I0729 17:52:26.295280  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:26.295288  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:26.295293  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:26.298472  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:26.494543  105708 request.go:629] Waited for 195.281031ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-794405
	I0729 17:52:26.494623  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405
	I0729 17:52:26.494632  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:26.494642  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:26.494647  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:26.498204  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:26.498710  105708 pod_ready.go:92] pod "kube-scheduler-ha-794405" in "kube-system" namespace has status "Ready":"True"
	I0729 17:52:26.498732  105708 pod_ready.go:81] duration metric: took 399.158818ms for pod "kube-scheduler-ha-794405" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:26.498746  105708 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-794405-m02" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:26.694837  105708 request.go:629] Waited for 195.973722ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-794405-m02
	I0729 17:52:26.694908  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-794405-m02
	I0729 17:52:26.694915  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:26.694925  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:26.694932  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:26.698462  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:26.895254  105708 request.go:629] Waited for 195.851427ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:52:26.895307  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m02
	I0729 17:52:26.895314  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:26.895324  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:26.895331  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:26.899943  105708 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 17:52:26.900594  105708 pod_ready.go:92] pod "kube-scheduler-ha-794405-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 17:52:26.900616  105708 pod_ready.go:81] duration metric: took 401.864196ms for pod "kube-scheduler-ha-794405-m02" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:26.900626  105708 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-794405-m03" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:27.095062  105708 request.go:629] Waited for 194.356554ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-794405-m03
	I0729 17:52:27.095119  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-794405-m03
	I0729 17:52:27.095124  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:27.095132  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:27.095138  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:27.098295  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:27.295286  105708 request.go:629] Waited for 196.364582ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:27.295340  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-794405-m03
	I0729 17:52:27.295345  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:27.295352  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:27.295356  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:27.298568  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:27.299084  105708 pod_ready.go:92] pod "kube-scheduler-ha-794405-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 17:52:27.299104  105708 pod_ready.go:81] duration metric: took 398.469732ms for pod "kube-scheduler-ha-794405-m03" in "kube-system" namespace to be "Ready" ...
	I0729 17:52:27.299114  105708 pod_ready.go:38] duration metric: took 5.190649862s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 17:52:27.299130  105708 api_server.go:52] waiting for apiserver process to appear ...
	I0729 17:52:27.299188  105708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 17:52:27.316096  105708 api_server.go:72] duration metric: took 20.631831701s to wait for apiserver process to appear ...
	I0729 17:52:27.316122  105708 api_server.go:88] waiting for apiserver healthz status ...
	I0729 17:52:27.316146  105708 api_server.go:253] Checking apiserver healthz at https://192.168.39.102:8443/healthz ...
	I0729 17:52:27.320502  105708 api_server.go:279] https://192.168.39.102:8443/healthz returned 200:
	ok
	I0729 17:52:27.320588  105708 round_trippers.go:463] GET https://192.168.39.102:8443/version
	I0729 17:52:27.320599  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:27.320609  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:27.320622  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:27.321551  105708 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0729 17:52:27.321626  105708 api_server.go:141] control plane version: v1.30.3
	I0729 17:52:27.321645  105708 api_server.go:131] duration metric: took 5.514184ms to wait for apiserver health ...
	I0729 17:52:27.321656  105708 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 17:52:27.495031  105708 request.go:629] Waited for 173.277349ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I0729 17:52:27.495091  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I0729 17:52:27.495096  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:27.495103  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:27.495109  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:27.503688  105708 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0729 17:52:27.509944  105708 system_pods.go:59] 24 kube-system pods found
	I0729 17:52:27.509972  105708 system_pods.go:61] "coredns-7db6d8ff4d-bb2jg" [ee9ad335-25b2-4e6c-a523-47b06ce713dc] Running
	I0729 17:52:27.509976  105708 system_pods.go:61] "coredns-7db6d8ff4d-nzvff" [b1e2c116-2549-4e1a-8d79-cd86595db9f3] Running
	I0729 17:52:27.509980  105708 system_pods.go:61] "etcd-ha-794405" [95b13c7f-e6bf-4225-a00b-de1fefb711ac] Running
	I0729 17:52:27.509984  105708 system_pods.go:61] "etcd-ha-794405-m02" [5c99a8df-66da-45b0-b62a-30e17113a35d] Running
	I0729 17:52:27.509987  105708 system_pods.go:61] "etcd-ha-794405-m03" [96db3933-6f55-4e09-8d3b-8e5ea049e182] Running
	I0729 17:52:27.509992  105708 system_pods.go:61] "kindnet-8qgq5" [0b2b707a-4283-41cd-9f3b-83b6f2d169cf] Running
	I0729 17:52:27.509996  105708 system_pods.go:61] "kindnet-g2qqp" [c4a0c764-368c-4059-be5b-ff49aa48f5af] Running
	I0729 17:52:27.510001  105708 system_pods.go:61] "kindnet-j4l89" [c0b81d74-531b-4878-84ea-654e7b57f0ba] Running
	I0729 17:52:27.510005  105708 system_pods.go:61] "kube-apiserver-ha-794405" [32e07436-25ae-4f51-b0e6-004ed954d8b7] Running
	I0729 17:52:27.510013  105708 system_pods.go:61] "kube-apiserver-ha-794405-m02" [0e7afc5b-62cf-4874-ade9-1df7a6c3926d] Running
	I0729 17:52:27.510018  105708 system_pods.go:61] "kube-apiserver-ha-794405-m03" [f4e70efe-e9bb-4157-9bdc-c69c621a4a9f] Running
	I0729 17:52:27.510024  105708 system_pods.go:61] "kube-controller-manager-ha-794405" [c2d30a4e-d2bc-4221-b8d9-98b67932e4ab] Running
	I0729 17:52:27.510031  105708 system_pods.go:61] "kube-controller-manager-ha-794405-m02" [88e040cd-4d7f-4a2e-97cb-4f24249d1c82] Running
	I0729 17:52:27.510039  105708 system_pods.go:61] "kube-controller-manager-ha-794405-m03" [bc163b01-3b26-4102-99c7-57070c064741] Running
	I0729 17:52:27.510043  105708 system_pods.go:61] "kube-proxy-llkz8" [95536eff-3f12-4a7e-9504-c8f6b1acc4cb] Running
	I0729 17:52:27.510047  105708 system_pods.go:61] "kube-proxy-ndmlm" [e49d3ffa-561a-4fee-9438-79bd64eaa77e] Running
	I0729 17:52:27.510050  105708 system_pods.go:61] "kube-proxy-qcmxl" [963fc8f8-3080-4602-9437-d2060c7ea622] Running
	I0729 17:52:27.510053  105708 system_pods.go:61] "kube-scheduler-ha-794405" [b41eb8ec-bf30-4cc6-b454-28305ccf70b5] Running
	I0729 17:52:27.510058  105708 system_pods.go:61] "kube-scheduler-ha-794405-m02" [8a0bfe0d-b80a-4799-8371-84041938bf1d] Running
	I0729 17:52:27.510061  105708 system_pods.go:61] "kube-scheduler-ha-794405-m03" [a04e274d-fa85-48c1-b346-5abc439b1caa] Running
	I0729 17:52:27.510064  105708 system_pods.go:61] "kube-vip-ha-794405" [0e782ab8-0d52-4894-b003-493294ab4710] Running
	I0729 17:52:27.510067  105708 system_pods.go:61] "kube-vip-ha-794405-m02" [f7e2f40a-28ab-4655-a82c-11df8cf806d5] Running
	I0729 17:52:27.510072  105708 system_pods.go:61] "kube-vip-ha-794405-m03" [c6cf8681-5029-4139-b6f5-9c72e1a186a7] Running
	I0729 17:52:27.510075  105708 system_pods.go:61] "storage-provisioner" [0e08d093-f8b5-4614-9be2-5832f7cafa75] Running
	I0729 17:52:27.510080  105708 system_pods.go:74] duration metric: took 188.415985ms to wait for pod list to return data ...
	I0729 17:52:27.510089  105708 default_sa.go:34] waiting for default service account to be created ...
	I0729 17:52:27.695511  105708 request.go:629] Waited for 185.340573ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/default/serviceaccounts
	I0729 17:52:27.695572  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/default/serviceaccounts
	I0729 17:52:27.695577  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:27.695585  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:27.695589  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:27.698868  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:27.699001  105708 default_sa.go:45] found service account: "default"
	I0729 17:52:27.699016  105708 default_sa.go:55] duration metric: took 188.920373ms for default service account to be created ...
	I0729 17:52:27.699025  105708 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 17:52:27.895459  105708 request.go:629] Waited for 196.359512ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I0729 17:52:27.895551  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I0729 17:52:27.895559  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:27.895567  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:27.895571  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:27.902023  105708 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0729 17:52:27.908310  105708 system_pods.go:86] 24 kube-system pods found
	I0729 17:52:27.908337  105708 system_pods.go:89] "coredns-7db6d8ff4d-bb2jg" [ee9ad335-25b2-4e6c-a523-47b06ce713dc] Running
	I0729 17:52:27.908343  105708 system_pods.go:89] "coredns-7db6d8ff4d-nzvff" [b1e2c116-2549-4e1a-8d79-cd86595db9f3] Running
	I0729 17:52:27.908347  105708 system_pods.go:89] "etcd-ha-794405" [95b13c7f-e6bf-4225-a00b-de1fefb711ac] Running
	I0729 17:52:27.908352  105708 system_pods.go:89] "etcd-ha-794405-m02" [5c99a8df-66da-45b0-b62a-30e17113a35d] Running
	I0729 17:52:27.908356  105708 system_pods.go:89] "etcd-ha-794405-m03" [96db3933-6f55-4e09-8d3b-8e5ea049e182] Running
	I0729 17:52:27.908360  105708 system_pods.go:89] "kindnet-8qgq5" [0b2b707a-4283-41cd-9f3b-83b6f2d169cf] Running
	I0729 17:52:27.908364  105708 system_pods.go:89] "kindnet-g2qqp" [c4a0c764-368c-4059-be5b-ff49aa48f5af] Running
	I0729 17:52:27.908368  105708 system_pods.go:89] "kindnet-j4l89" [c0b81d74-531b-4878-84ea-654e7b57f0ba] Running
	I0729 17:52:27.908372  105708 system_pods.go:89] "kube-apiserver-ha-794405" [32e07436-25ae-4f51-b0e6-004ed954d8b7] Running
	I0729 17:52:27.908377  105708 system_pods.go:89] "kube-apiserver-ha-794405-m02" [0e7afc5b-62cf-4874-ade9-1df7a6c3926d] Running
	I0729 17:52:27.908381  105708 system_pods.go:89] "kube-apiserver-ha-794405-m03" [f4e70efe-e9bb-4157-9bdc-c69c621a4a9f] Running
	I0729 17:52:27.908386  105708 system_pods.go:89] "kube-controller-manager-ha-794405" [c2d30a4e-d2bc-4221-b8d9-98b67932e4ab] Running
	I0729 17:52:27.908390  105708 system_pods.go:89] "kube-controller-manager-ha-794405-m02" [88e040cd-4d7f-4a2e-97cb-4f24249d1c82] Running
	I0729 17:52:27.908394  105708 system_pods.go:89] "kube-controller-manager-ha-794405-m03" [bc163b01-3b26-4102-99c7-57070c064741] Running
	I0729 17:52:27.908398  105708 system_pods.go:89] "kube-proxy-llkz8" [95536eff-3f12-4a7e-9504-c8f6b1acc4cb] Running
	I0729 17:52:27.908402  105708 system_pods.go:89] "kube-proxy-ndmlm" [e49d3ffa-561a-4fee-9438-79bd64eaa77e] Running
	I0729 17:52:27.908409  105708 system_pods.go:89] "kube-proxy-qcmxl" [963fc8f8-3080-4602-9437-d2060c7ea622] Running
	I0729 17:52:27.908413  105708 system_pods.go:89] "kube-scheduler-ha-794405" [b41eb8ec-bf30-4cc6-b454-28305ccf70b5] Running
	I0729 17:52:27.908416  105708 system_pods.go:89] "kube-scheduler-ha-794405-m02" [8a0bfe0d-b80a-4799-8371-84041938bf1d] Running
	I0729 17:52:27.908420  105708 system_pods.go:89] "kube-scheduler-ha-794405-m03" [a04e274d-fa85-48c1-b346-5abc439b1caa] Running
	I0729 17:52:27.908424  105708 system_pods.go:89] "kube-vip-ha-794405" [0e782ab8-0d52-4894-b003-493294ab4710] Running
	I0729 17:52:27.908427  105708 system_pods.go:89] "kube-vip-ha-794405-m02" [f7e2f40a-28ab-4655-a82c-11df8cf806d5] Running
	I0729 17:52:27.908430  105708 system_pods.go:89] "kube-vip-ha-794405-m03" [c6cf8681-5029-4139-b6f5-9c72e1a186a7] Running
	I0729 17:52:27.908434  105708 system_pods.go:89] "storage-provisioner" [0e08d093-f8b5-4614-9be2-5832f7cafa75] Running
	I0729 17:52:27.908440  105708 system_pods.go:126] duration metric: took 209.410233ms to wait for k8s-apps to be running ...
	I0729 17:52:27.908451  105708 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 17:52:27.908496  105708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 17:52:27.924491  105708 system_svc.go:56] duration metric: took 16.032013ms WaitForService to wait for kubelet
	I0729 17:52:27.924520  105708 kubeadm.go:582] duration metric: took 21.240258453s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 17:52:27.924538  105708 node_conditions.go:102] verifying NodePressure condition ...
	I0729 17:52:28.095243  105708 request.go:629] Waited for 170.622148ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes
	I0729 17:52:28.095344  105708 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes
	I0729 17:52:28.095362  105708 round_trippers.go:469] Request Headers:
	I0729 17:52:28.095373  105708 round_trippers.go:473]     Accept: application/json, */*
	I0729 17:52:28.095383  105708 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 17:52:28.098922  105708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 17:52:28.100208  105708 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 17:52:28.100233  105708 node_conditions.go:123] node cpu capacity is 2
	I0729 17:52:28.100244  105708 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 17:52:28.100248  105708 node_conditions.go:123] node cpu capacity is 2
	I0729 17:52:28.100251  105708 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 17:52:28.100254  105708 node_conditions.go:123] node cpu capacity is 2
	I0729 17:52:28.100258  105708 node_conditions.go:105] duration metric: took 175.716329ms to run NodePressure ...
	I0729 17:52:28.100269  105708 start.go:241] waiting for startup goroutines ...
	I0729 17:52:28.100289  105708 start.go:255] writing updated cluster config ...
	I0729 17:52:28.100595  105708 ssh_runner.go:195] Run: rm -f paused
	I0729 17:52:28.154674  105708 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 17:52:28.156740  105708 out.go:177] * Done! kubectl is now configured to use "ha-794405" cluster and "default" namespace by default
	
	
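	For reference, the pod_ready.go lines above show minikube polling each control-plane pod in kube-system until its Ready condition is True (with a 6m0s cap per pod), then probing the apiserver's /healthz endpoint. The sketch below is only an illustration of that polling pattern using client-go, not minikube's own implementation; the pod name is taken from the log, while the kubeconfig path, poll interval, and error handling are assumptions.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's Ready condition is True,
	// which is the same condition the pod_ready.go log lines wait on.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Assumption: kubeconfig at the default location (~/.kube/config).
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}

		// Poll one control-plane pod until Ready or until the 6-minute
		// deadline seen in the log expires; 2s interval is an assumption.
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := client.CoreV1().Pods("kube-system").Get(
				context.TODO(), "kube-apiserver-ha-794405", metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to be Ready")
	}

	The client-side throttling messages interleaved above come from client-go's default rate limiter pacing these repeated GETs; they indicate request spacing, not an apiserver problem.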
	==> CRI-O <==
	Jul 29 17:57:04 ha-794405 crio[683]: time="2024-07-29 17:57:04.416459832Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722275824416435634,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fe3327a7-a852-450a-b8ec-12944297a104 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:57:04 ha-794405 crio[683]: time="2024-07-29 17:57:04.417101992Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=71f89aed-e306-4d7b-81d2-09f63cc8dceb name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:57:04 ha-794405 crio[683]: time="2024-07-29 17:57:04.417152973Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=71f89aed-e306-4d7b-81d2-09f63cc8dceb name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:57:04 ha-794405 crio[683]: time="2024-07-29 17:57:04.417425292Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:882dc7ddd36ca559e59fbe1b16606ac3c40a7268770e2f675775826c1aa17280,PodSandboxId:030fd183fc5d70136c09a8b7e0e2865f44dd2a213638647e35bdbda279b830ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722275551524991054,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9t4xg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ceb96a8b-de79-4d8b-a767-8e61b163b088,},Annotations:map[string]string{io.kubernetes.container.hash: 341563b5,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cd9159e204634d6ac3d51e419d539364b61d6e875ff4a12637862c389aadc97,PodSandboxId:240fbb16ebb18d19e584558e2991c6aa9018969c0c29e9cd1972b1748d356979,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722275416635155029,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e08d093-f8b5-4614-9be2-5832f7cafa75,},Annotations:map[string]string{io.kubernetes.container.hash: c0dcef9d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34646ba311f51478411d1b66560efa7481f439c4be5e7764de99aa5dc1d517ca,PodSandboxId:0a85f31b7216e5e441bcfe38479d4617dde44adc1b035dce9e95d895dee48f2f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722275416642459474,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nzvff,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1e2c116-2549-4e1a-8d79-cd86595db9f3,},Annotations:map[string]string{io.kubernetes.container.hash: ea8e4842,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11e098645d7d850c038085ef2398844bd0fc149ede4fc04827afc44a6ff0c20d,PodSandboxId:c21b66fe5a20a39a06ae0c7c25ee683e7eba20a1e456cb12273df31e8c1144bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722275416592553577,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bb2jg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee9ad335-25
b2-4e6c-a523-47b06ce713dc,},Annotations:map[string]string{io.kubernetes.container.hash: 29990db6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5005f4869048ef0f06e4eb1b5d7da9e4d2f016398c43e253b17b0445643472b5,PodSandboxId:a04c14b520cac4fb30d6126c231007d3c29694f71329096e36e4531123b8d5f7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722275404641520344,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j4l89,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0b81d74-531b-4878-84ea-654e7b57f0ba,},Annotations:map[string]string{io.kubernetes.container.hash: a8506c0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2992a8242c5e7814d064049bad66f003aaa89848eaa74422ed7c90776fdf849f,PodSandboxId:afea598394fc694045c2dd49ea65df7a7559d4cad31d50a2ddcf34b62d3e506f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172227540
1434014276,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-llkz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95536eff-3f12-4a7e-9504-c8f6b1acc4cb,},Annotations:map[string]string{io.kubernetes.container.hash: 6b14763f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83c7e5300596ed794752e31ceb8cb03e339b3ea52305f02c83e577378000130f,PodSandboxId:7510b2d9ade47876437c644891b0e0b0f683f3900c72a8708be8436cad9710de,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17222753829
69924866,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d374c3f980522c4e4148a3ee91a62ea,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:152a9fa24ee44b5e9f21db72728d07a6432ed62f3a5c2c05ca7c1cd6de36609a,PodSandboxId:65aead88c888b23fc47036ea29d01bdcd48fa0b7b69073b480b5ea4ab84eef2a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722275381102711071,Labels:map[stri
ng]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e530a81d1e9a9e39055d59309a089fd,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:985c673864e1a2dafce96d68ada1dba868c8e47de790d46e403469f1abd8bd8e,PodSandboxId:2d88c70ad1fa5ef38fb6bebafbf2c938058f1611d73b528edfcef167ecc0db56,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722275381088832898,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5110118fe5cf51b6a61d9f9785be3c3c,},Annotations:map[string]string{io.kubernetes.container.hash: caa197,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fca3429715988ae09489d3d0f512d28babc61b2e2b8a3324612fba1c47839f25,PodSandboxId:da888d4d893d610e002957f561d1f310068a089f638fa6e2f658d571ef154999,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722275381004903660,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c874b85b6752de4391e8b92749861ca9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e224997d35927a2245c0946f16313307ad55fb6c004535e745af98f59921405e,PodSandboxId:a93bf9947672ad13675c5da4da58a497db7376bebf175bf0121b9f445e340e54,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722275380962157338,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernete
s.pod.name: etcd-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3d262369b7075ef1593bfc8c891dbcd,},Annotations:map[string]string{io.kubernetes.container.hash: 7df42e99,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=71f89aed-e306-4d7b-81d2-09f63cc8dceb name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:57:04 ha-794405 crio[683]: time="2024-07-29 17:57:04.458499669Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4cf1ca72-a0ca-492d-9c2b-8101eb62618b name=/runtime.v1.RuntimeService/Version
	Jul 29 17:57:04 ha-794405 crio[683]: time="2024-07-29 17:57:04.458568896Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4cf1ca72-a0ca-492d-9c2b-8101eb62618b name=/runtime.v1.RuntimeService/Version
	Jul 29 17:57:04 ha-794405 crio[683]: time="2024-07-29 17:57:04.459562734Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3f7362d4-4597-4d15-ba1b-13d1b4db2608 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:57:04 ha-794405 crio[683]: time="2024-07-29 17:57:04.459977457Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722275824459957551,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3f7362d4-4597-4d15-ba1b-13d1b4db2608 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:57:04 ha-794405 crio[683]: time="2024-07-29 17:57:04.460532707Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d64180af-4e3e-48dc-8753-40b5ad3f787b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:57:04 ha-794405 crio[683]: time="2024-07-29 17:57:04.460584166Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d64180af-4e3e-48dc-8753-40b5ad3f787b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:57:04 ha-794405 crio[683]: time="2024-07-29 17:57:04.460829166Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:882dc7ddd36ca559e59fbe1b16606ac3c40a7268770e2f675775826c1aa17280,PodSandboxId:030fd183fc5d70136c09a8b7e0e2865f44dd2a213638647e35bdbda279b830ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722275551524991054,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9t4xg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ceb96a8b-de79-4d8b-a767-8e61b163b088,},Annotations:map[string]string{io.kubernetes.container.hash: 341563b5,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cd9159e204634d6ac3d51e419d539364b61d6e875ff4a12637862c389aadc97,PodSandboxId:240fbb16ebb18d19e584558e2991c6aa9018969c0c29e9cd1972b1748d356979,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722275416635155029,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e08d093-f8b5-4614-9be2-5832f7cafa75,},Annotations:map[string]string{io.kubernetes.container.hash: c0dcef9d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34646ba311f51478411d1b66560efa7481f439c4be5e7764de99aa5dc1d517ca,PodSandboxId:0a85f31b7216e5e441bcfe38479d4617dde44adc1b035dce9e95d895dee48f2f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722275416642459474,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nzvff,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1e2c116-2549-4e1a-8d79-cd86595db9f3,},Annotations:map[string]string{io.kubernetes.container.hash: ea8e4842,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11e098645d7d850c038085ef2398844bd0fc149ede4fc04827afc44a6ff0c20d,PodSandboxId:c21b66fe5a20a39a06ae0c7c25ee683e7eba20a1e456cb12273df31e8c1144bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722275416592553577,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bb2jg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee9ad335-25
b2-4e6c-a523-47b06ce713dc,},Annotations:map[string]string{io.kubernetes.container.hash: 29990db6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5005f4869048ef0f06e4eb1b5d7da9e4d2f016398c43e253b17b0445643472b5,PodSandboxId:a04c14b520cac4fb30d6126c231007d3c29694f71329096e36e4531123b8d5f7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722275404641520344,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j4l89,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0b81d74-531b-4878-84ea-654e7b57f0ba,},Annotations:map[string]string{io.kubernetes.container.hash: a8506c0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2992a8242c5e7814d064049bad66f003aaa89848eaa74422ed7c90776fdf849f,PodSandboxId:afea598394fc694045c2dd49ea65df7a7559d4cad31d50a2ddcf34b62d3e506f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172227540
1434014276,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-llkz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95536eff-3f12-4a7e-9504-c8f6b1acc4cb,},Annotations:map[string]string{io.kubernetes.container.hash: 6b14763f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83c7e5300596ed794752e31ceb8cb03e339b3ea52305f02c83e577378000130f,PodSandboxId:7510b2d9ade47876437c644891b0e0b0f683f3900c72a8708be8436cad9710de,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17222753829
69924866,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d374c3f980522c4e4148a3ee91a62ea,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:152a9fa24ee44b5e9f21db72728d07a6432ed62f3a5c2c05ca7c1cd6de36609a,PodSandboxId:65aead88c888b23fc47036ea29d01bdcd48fa0b7b69073b480b5ea4ab84eef2a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722275381102711071,Labels:map[stri
ng]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e530a81d1e9a9e39055d59309a089fd,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:985c673864e1a2dafce96d68ada1dba868c8e47de790d46e403469f1abd8bd8e,PodSandboxId:2d88c70ad1fa5ef38fb6bebafbf2c938058f1611d73b528edfcef167ecc0db56,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722275381088832898,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5110118fe5cf51b6a61d9f9785be3c3c,},Annotations:map[string]string{io.kubernetes.container.hash: caa197,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fca3429715988ae09489d3d0f512d28babc61b2e2b8a3324612fba1c47839f25,PodSandboxId:da888d4d893d610e002957f561d1f310068a089f638fa6e2f658d571ef154999,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722275381004903660,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c874b85b6752de4391e8b92749861ca9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e224997d35927a2245c0946f16313307ad55fb6c004535e745af98f59921405e,PodSandboxId:a93bf9947672ad13675c5da4da58a497db7376bebf175bf0121b9f445e340e54,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722275380962157338,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernete
s.pod.name: etcd-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3d262369b7075ef1593bfc8c891dbcd,},Annotations:map[string]string{io.kubernetes.container.hash: 7df42e99,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d64180af-4e3e-48dc-8753-40b5ad3f787b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:57:04 ha-794405 crio[683]: time="2024-07-29 17:57:04.498628957Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d6dd7c50-d099-4247-81eb-b433fd8e6ab9 name=/runtime.v1.RuntimeService/Version
	Jul 29 17:57:04 ha-794405 crio[683]: time="2024-07-29 17:57:04.498737269Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d6dd7c50-d099-4247-81eb-b433fd8e6ab9 name=/runtime.v1.RuntimeService/Version
	Jul 29 17:57:04 ha-794405 crio[683]: time="2024-07-29 17:57:04.500071705Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=88f4df8f-7b8b-4270-893e-f4b8a8ace2e3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:57:04 ha-794405 crio[683]: time="2024-07-29 17:57:04.500593695Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722275824500569674,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=88f4df8f-7b8b-4270-893e-f4b8a8ace2e3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:57:04 ha-794405 crio[683]: time="2024-07-29 17:57:04.501139622Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4196a5f2-666c-4028-a8b1-45ea1843e518 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:57:04 ha-794405 crio[683]: time="2024-07-29 17:57:04.501194565Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4196a5f2-666c-4028-a8b1-45ea1843e518 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:57:04 ha-794405 crio[683]: time="2024-07-29 17:57:04.501567372Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:882dc7ddd36ca559e59fbe1b16606ac3c40a7268770e2f675775826c1aa17280,PodSandboxId:030fd183fc5d70136c09a8b7e0e2865f44dd2a213638647e35bdbda279b830ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722275551524991054,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9t4xg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ceb96a8b-de79-4d8b-a767-8e61b163b088,},Annotations:map[string]string{io.kubernetes.container.hash: 341563b5,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cd9159e204634d6ac3d51e419d539364b61d6e875ff4a12637862c389aadc97,PodSandboxId:240fbb16ebb18d19e584558e2991c6aa9018969c0c29e9cd1972b1748d356979,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722275416635155029,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e08d093-f8b5-4614-9be2-5832f7cafa75,},Annotations:map[string]string{io.kubernetes.container.hash: c0dcef9d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34646ba311f51478411d1b66560efa7481f439c4be5e7764de99aa5dc1d517ca,PodSandboxId:0a85f31b7216e5e441bcfe38479d4617dde44adc1b035dce9e95d895dee48f2f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722275416642459474,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nzvff,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1e2c116-2549-4e1a-8d79-cd86595db9f3,},Annotations:map[string]string{io.kubernetes.container.hash: ea8e4842,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11e098645d7d850c038085ef2398844bd0fc149ede4fc04827afc44a6ff0c20d,PodSandboxId:c21b66fe5a20a39a06ae0c7c25ee683e7eba20a1e456cb12273df31e8c1144bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722275416592553577,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bb2jg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee9ad335-25
b2-4e6c-a523-47b06ce713dc,},Annotations:map[string]string{io.kubernetes.container.hash: 29990db6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5005f4869048ef0f06e4eb1b5d7da9e4d2f016398c43e253b17b0445643472b5,PodSandboxId:a04c14b520cac4fb30d6126c231007d3c29694f71329096e36e4531123b8d5f7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722275404641520344,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j4l89,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0b81d74-531b-4878-84ea-654e7b57f0ba,},Annotations:map[string]string{io.kubernetes.container.hash: a8506c0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2992a8242c5e7814d064049bad66f003aaa89848eaa74422ed7c90776fdf849f,PodSandboxId:afea598394fc694045c2dd49ea65df7a7559d4cad31d50a2ddcf34b62d3e506f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172227540
1434014276,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-llkz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95536eff-3f12-4a7e-9504-c8f6b1acc4cb,},Annotations:map[string]string{io.kubernetes.container.hash: 6b14763f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83c7e5300596ed794752e31ceb8cb03e339b3ea52305f02c83e577378000130f,PodSandboxId:7510b2d9ade47876437c644891b0e0b0f683f3900c72a8708be8436cad9710de,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17222753829
69924866,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d374c3f980522c4e4148a3ee91a62ea,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:152a9fa24ee44b5e9f21db72728d07a6432ed62f3a5c2c05ca7c1cd6de36609a,PodSandboxId:65aead88c888b23fc47036ea29d01bdcd48fa0b7b69073b480b5ea4ab84eef2a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722275381102711071,Labels:map[stri
ng]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e530a81d1e9a9e39055d59309a089fd,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:985c673864e1a2dafce96d68ada1dba868c8e47de790d46e403469f1abd8bd8e,PodSandboxId:2d88c70ad1fa5ef38fb6bebafbf2c938058f1611d73b528edfcef167ecc0db56,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722275381088832898,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5110118fe5cf51b6a61d9f9785be3c3c,},Annotations:map[string]string{io.kubernetes.container.hash: caa197,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fca3429715988ae09489d3d0f512d28babc61b2e2b8a3324612fba1c47839f25,PodSandboxId:da888d4d893d610e002957f561d1f310068a089f638fa6e2f658d571ef154999,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722275381004903660,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c874b85b6752de4391e8b92749861ca9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e224997d35927a2245c0946f16313307ad55fb6c004535e745af98f59921405e,PodSandboxId:a93bf9947672ad13675c5da4da58a497db7376bebf175bf0121b9f445e340e54,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722275380962157338,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernete
s.pod.name: etcd-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3d262369b7075ef1593bfc8c891dbcd,},Annotations:map[string]string{io.kubernetes.container.hash: 7df42e99,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4196a5f2-666c-4028-a8b1-45ea1843e518 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:57:04 ha-794405 crio[683]: time="2024-07-29 17:57:04.537318845Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=aaff663e-8e0d-4603-956b-51ff49952a78 name=/runtime.v1.RuntimeService/Version
	Jul 29 17:57:04 ha-794405 crio[683]: time="2024-07-29 17:57:04.537466308Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=aaff663e-8e0d-4603-956b-51ff49952a78 name=/runtime.v1.RuntimeService/Version
	Jul 29 17:57:04 ha-794405 crio[683]: time="2024-07-29 17:57:04.538625492Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1847096d-e4cf-4f50-a903-75742f9648aa name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:57:04 ha-794405 crio[683]: time="2024-07-29 17:57:04.539457922Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722275824539072725,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1847096d-e4cf-4f50-a903-75742f9648aa name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 17:57:04 ha-794405 crio[683]: time="2024-07-29 17:57:04.540208389Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cb148174-fb91-446e-a6c3-f0aefe7fc1c7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:57:04 ha-794405 crio[683]: time="2024-07-29 17:57:04.540265249Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cb148174-fb91-446e-a6c3-f0aefe7fc1c7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 17:57:04 ha-794405 crio[683]: time="2024-07-29 17:57:04.540743429Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:882dc7ddd36ca559e59fbe1b16606ac3c40a7268770e2f675775826c1aa17280,PodSandboxId:030fd183fc5d70136c09a8b7e0e2865f44dd2a213638647e35bdbda279b830ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722275551524991054,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9t4xg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ceb96a8b-de79-4d8b-a767-8e61b163b088,},Annotations:map[string]string{io.kubernetes.container.hash: 341563b5,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cd9159e204634d6ac3d51e419d539364b61d6e875ff4a12637862c389aadc97,PodSandboxId:240fbb16ebb18d19e584558e2991c6aa9018969c0c29e9cd1972b1748d356979,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722275416635155029,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e08d093-f8b5-4614-9be2-5832f7cafa75,},Annotations:map[string]string{io.kubernetes.container.hash: c0dcef9d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34646ba311f51478411d1b66560efa7481f439c4be5e7764de99aa5dc1d517ca,PodSandboxId:0a85f31b7216e5e441bcfe38479d4617dde44adc1b035dce9e95d895dee48f2f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722275416642459474,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nzvff,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1e2c116-2549-4e1a-8d79-cd86595db9f3,},Annotations:map[string]string{io.kubernetes.container.hash: ea8e4842,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11e098645d7d850c038085ef2398844bd0fc149ede4fc04827afc44a6ff0c20d,PodSandboxId:c21b66fe5a20a39a06ae0c7c25ee683e7eba20a1e456cb12273df31e8c1144bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722275416592553577,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bb2jg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee9ad335-25
b2-4e6c-a523-47b06ce713dc,},Annotations:map[string]string{io.kubernetes.container.hash: 29990db6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5005f4869048ef0f06e4eb1b5d7da9e4d2f016398c43e253b17b0445643472b5,PodSandboxId:a04c14b520cac4fb30d6126c231007d3c29694f71329096e36e4531123b8d5f7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722275404641520344,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j4l89,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0b81d74-531b-4878-84ea-654e7b57f0ba,},Annotations:map[string]string{io.kubernetes.container.hash: a8506c0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2992a8242c5e7814d064049bad66f003aaa89848eaa74422ed7c90776fdf849f,PodSandboxId:afea598394fc694045c2dd49ea65df7a7559d4cad31d50a2ddcf34b62d3e506f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172227540
1434014276,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-llkz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95536eff-3f12-4a7e-9504-c8f6b1acc4cb,},Annotations:map[string]string{io.kubernetes.container.hash: 6b14763f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83c7e5300596ed794752e31ceb8cb03e339b3ea52305f02c83e577378000130f,PodSandboxId:7510b2d9ade47876437c644891b0e0b0f683f3900c72a8708be8436cad9710de,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17222753829
69924866,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d374c3f980522c4e4148a3ee91a62ea,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:152a9fa24ee44b5e9f21db72728d07a6432ed62f3a5c2c05ca7c1cd6de36609a,PodSandboxId:65aead88c888b23fc47036ea29d01bdcd48fa0b7b69073b480b5ea4ab84eef2a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722275381102711071,Labels:map[stri
ng]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e530a81d1e9a9e39055d59309a089fd,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:985c673864e1a2dafce96d68ada1dba868c8e47de790d46e403469f1abd8bd8e,PodSandboxId:2d88c70ad1fa5ef38fb6bebafbf2c938058f1611d73b528edfcef167ecc0db56,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722275381088832898,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5110118fe5cf51b6a61d9f9785be3c3c,},Annotations:map[string]string{io.kubernetes.container.hash: caa197,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fca3429715988ae09489d3d0f512d28babc61b2e2b8a3324612fba1c47839f25,PodSandboxId:da888d4d893d610e002957f561d1f310068a089f638fa6e2f658d571ef154999,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722275381004903660,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c874b85b6752de4391e8b92749861ca9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e224997d35927a2245c0946f16313307ad55fb6c004535e745af98f59921405e,PodSandboxId:a93bf9947672ad13675c5da4da58a497db7376bebf175bf0121b9f445e340e54,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722275380962157338,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernete
s.pod.name: etcd-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3d262369b7075ef1593bfc8c891dbcd,},Annotations:map[string]string{io.kubernetes.container.hash: 7df42e99,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cb148174-fb91-446e-a6c3-f0aefe7fc1c7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	882dc7ddd36ca       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   030fd183fc5d7       busybox-fc5497c4f-9t4xg
	34646ba311f51       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   0a85f31b7216e       coredns-7db6d8ff4d-nzvff
	9cd9159e20463       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   240fbb16ebb18       storage-provisioner
	11e098645d7d8       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   c21b66fe5a20a       coredns-7db6d8ff4d-bb2jg
	5005f4869048e       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    6 minutes ago       Running             kindnet-cni               0                   a04c14b520cac       kindnet-j4l89
	2992a8242c5e7       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      7 minutes ago       Running             kube-proxy                0                   afea598394fc6       kube-proxy-llkz8
	83c7e5300596e       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   7510b2d9ade47       kube-vip-ha-794405
	152a9fa24ee44       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      7 minutes ago       Running             kube-controller-manager   0                   65aead88c888b       kube-controller-manager-ha-794405
	985c673864e1a       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      7 minutes ago       Running             kube-apiserver            0                   2d88c70ad1fa5       kube-apiserver-ha-794405
	fca3429715988       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      7 minutes ago       Running             kube-scheduler            0                   da888d4d893d6       kube-scheduler-ha-794405
	e224997d35927       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago       Running             etcd                      0                   a93bf9947672a       etcd-ha-794405
	
	
	==> coredns [11e098645d7d850c038085ef2398844bd0fc149ede4fc04827afc44a6ff0c20d] <==
	[INFO] 10.244.1.2:40259 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000163059s
	[INFO] 10.244.2.2:53496 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000193518s
	[INFO] 10.244.2.2:55534 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.010566736s
	[INFO] 10.244.2.2:40585 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000234026s
	[INFO] 10.244.2.2:49780 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000172106s
	[INFO] 10.244.0.4:57455 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00010734s
	[INFO] 10.244.0.4:49757 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000134817s
	[INFO] 10.244.0.4:34537 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000083091s
	[INFO] 10.244.0.4:59243 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000094884s
	[INFO] 10.244.0.4:32813 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000194094s
	[INFO] 10.244.1.2:51380 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001717695s
	[INFO] 10.244.1.2:41977 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000084863s
	[INFO] 10.244.1.2:45990 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000090641s
	[INFO] 10.244.1.2:55905 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000128239s
	[INFO] 10.244.1.2:57839 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000092047s
	[INFO] 10.244.0.4:52553 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000155036s
	[INFO] 10.244.0.4:60833 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000116165s
	[INFO] 10.244.0.4:58984 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000096169s
	[INFO] 10.244.1.2:56581 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000099926s
	[INFO] 10.244.2.2:47299 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000251364s
	[INFO] 10.244.2.2:54140 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000131767s
	[INFO] 10.244.0.4:37906 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128168s
	[INFO] 10.244.0.4:53897 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000128545s
	[INFO] 10.244.0.4:42232 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000175859s
	[INFO] 10.244.1.2:58375 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000225865s
	
	
	==> coredns [34646ba311f51478411d1b66560efa7481f439c4be5e7764de99aa5dc1d517ca] <==
	[INFO] 10.244.1.2:54634 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001797843s
	[INFO] 10.244.2.2:47599 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000241341s
	[INFO] 10.244.2.2:54826 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003913926s
	[INFO] 10.244.2.2:38410 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000162546s
	[INFO] 10.244.2.2:58834 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000129002s
	[INFO] 10.244.0.4:49557 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000090727s
	[INFO] 10.244.0.4:33820 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001835803s
	[INFO] 10.244.0.4:39762 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001456019s
	[INFO] 10.244.1.2:49407 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00010484s
	[INFO] 10.244.1.2:41901 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000153055s
	[INFO] 10.244.1.2:46891 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001271955s
	[INFO] 10.244.2.2:49560 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000127808s
	[INFO] 10.244.2.2:56119 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00007809s
	[INFO] 10.244.2.2:38291 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0002272s
	[INFO] 10.244.2.2:47373 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000074396s
	[INFO] 10.244.0.4:48660 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000051359s
	[INFO] 10.244.1.2:45618 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00016309s
	[INFO] 10.244.1.2:34022 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000090959s
	[INFO] 10.244.1.2:55925 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000187604s
	[INFO] 10.244.2.2:52948 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132206s
	[INFO] 10.244.2.2:50512 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000133066s
	[INFO] 10.244.0.4:56090 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00011653s
	[INFO] 10.244.1.2:53420 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109055s
	[INFO] 10.244.1.2:51296 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000101897s
	[INFO] 10.244.1.2:36056 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000072778s
	
	
	==> describe nodes <==
	Name:               ha-794405
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-794405
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=de53afae5e8f4269438099f4dad14d93a8a17e35
	                    minikube.k8s.io/name=ha-794405
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T17_49_48_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 17:49:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-794405
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 17:56:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 17:52:50 +0000   Mon, 29 Jul 2024 17:49:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 17:52:50 +0000   Mon, 29 Jul 2024 17:49:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 17:52:50 +0000   Mon, 29 Jul 2024 17:49:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 17:52:50 +0000   Mon, 29 Jul 2024 17:50:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.102
	  Hostname:    ha-794405
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4f5d049fcd1645d38ff56c6e587d83f8
	  System UUID:                4f5d049f-cd16-45d3-8ff5-6c6e587d83f8
	  Boot ID:                    a36bbb12-7ddf-423d-b68c-d781a4b4af75
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-9t4xg              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	  kube-system                 coredns-7db6d8ff4d-bb2jg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m4s
	  kube-system                 coredns-7db6d8ff4d-nzvff             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m4s
	  kube-system                 etcd-ha-794405                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m17s
	  kube-system                 kindnet-j4l89                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m4s
	  kube-system                 kube-apiserver-ha-794405             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m17s
	  kube-system                 kube-controller-manager-ha-794405    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m17s
	  kube-system                 kube-proxy-llkz8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m4s
	  kube-system                 kube-scheduler-ha-794405             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m17s
	  kube-system                 kube-vip-ha-794405                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m17s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m2s   kube-proxy       
	  Normal  Starting                 7m17s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m17s  kubelet          Node ha-794405 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m17s  kubelet          Node ha-794405 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m17s  kubelet          Node ha-794405 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m4s   node-controller  Node ha-794405 event: Registered Node ha-794405 in Controller
	  Normal  NodeReady                6m48s  kubelet          Node ha-794405 status is now: NodeReady
	  Normal  RegisteredNode           5m56s  node-controller  Node ha-794405 event: Registered Node ha-794405 in Controller
	  Normal  RegisteredNode           4m44s  node-controller  Node ha-794405 event: Registered Node ha-794405 in Controller
	
	
	Name:               ha-794405-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-794405-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=de53afae5e8f4269438099f4dad14d93a8a17e35
	                    minikube.k8s.io/name=ha-794405
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T17_50_52_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 17:50:49 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-794405-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 17:53:43 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 29 Jul 2024 17:52:52 +0000   Mon, 29 Jul 2024 17:54:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 29 Jul 2024 17:52:52 +0000   Mon, 29 Jul 2024 17:54:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 29 Jul 2024 17:52:52 +0000   Mon, 29 Jul 2024 17:54:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 29 Jul 2024 17:52:52 +0000   Mon, 29 Jul 2024 17:54:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.62
	  Hostname:    ha-794405-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 437dda8ebd384bf294c14831928d98f5
	  System UUID:                437dda8e-bd38-4bf2-94c1-4831928d98f5
	  Boot ID:                    8dac2304-3043-4420-be7b-4720ee3f4a37
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-kq6g2                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	  kube-system                 etcd-ha-794405-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m14s
	  kube-system                 kindnet-8qgq5                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m15s
	  kube-system                 kube-apiserver-ha-794405-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 kube-controller-manager-ha-794405-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 kube-proxy-qcmxl                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 kube-scheduler-ha-794405-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 kube-vip-ha-794405-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m11s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  6m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m15s (x8 over 6m16s)  kubelet          Node ha-794405-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m15s (x8 over 6m16s)  kubelet          Node ha-794405-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m15s (x7 over 6m16s)  kubelet          Node ha-794405-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m14s                  node-controller  Node ha-794405-m02 event: Registered Node ha-794405-m02 in Controller
	  Normal  RegisteredNode           5m56s                  node-controller  Node ha-794405-m02 event: Registered Node ha-794405-m02 in Controller
	  Normal  RegisteredNode           4m44s                  node-controller  Node ha-794405-m02 event: Registered Node ha-794405-m02 in Controller
	  Normal  NodeNotReady             2m39s                  node-controller  Node ha-794405-m02 status is now: NodeNotReady
	
	
	Name:               ha-794405-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-794405-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=de53afae5e8f4269438099f4dad14d93a8a17e35
	                    minikube.k8s.io/name=ha-794405
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T17_52_06_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 17:52:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-794405-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 17:56:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 17:52:32 +0000   Mon, 29 Jul 2024 17:52:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 17:52:32 +0000   Mon, 29 Jul 2024 17:52:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 17:52:32 +0000   Mon, 29 Jul 2024 17:52:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 17:52:32 +0000   Mon, 29 Jul 2024 17:52:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.185
	  Hostname:    ha-794405-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7788bd32e72d421d86476277253535d2
	  System UUID:                7788bd32-e72d-421d-8647-6277253535d2
	  Boot ID:                    99ed6d55-8112-4d56-83c8-983b813fa1bc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-8xr2r                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	  kube-system                 etcd-ha-794405-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m1s
	  kube-system                 kindnet-g2qqp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m3s
	  kube-system                 kube-apiserver-ha-794405-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 kube-controller-manager-ha-794405-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 kube-proxy-ndmlm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 kube-scheduler-ha-794405-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 kube-vip-ha-794405-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m58s                kube-proxy       
	  Normal  NodeHasSufficientMemory  5m3s (x8 over 5m3s)  kubelet          Node ha-794405-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m3s (x8 over 5m3s)  kubelet          Node ha-794405-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m3s (x7 over 5m3s)  kubelet          Node ha-794405-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m1s                 node-controller  Node ha-794405-m03 event: Registered Node ha-794405-m03 in Controller
	  Normal  RegisteredNode           4m59s                node-controller  Node ha-794405-m03 event: Registered Node ha-794405-m03 in Controller
	  Normal  RegisteredNode           4m44s                node-controller  Node ha-794405-m03 event: Registered Node ha-794405-m03 in Controller
	
	
	Name:               ha-794405-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-794405-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=de53afae5e8f4269438099f4dad14d93a8a17e35
	                    minikube.k8s.io/name=ha-794405
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T17_53_05_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 17:53:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-794405-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 17:56:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 17:53:35 +0000   Mon, 29 Jul 2024 17:53:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 17:53:35 +0000   Mon, 29 Jul 2024 17:53:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 17:53:35 +0000   Mon, 29 Jul 2024 17:53:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 17:53:35 +0000   Mon, 29 Jul 2024 17:53:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.179
	  Hostname:    ha-794405-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2eee0b726b504b318de9dcda1a6d7202
	  System UUID:                2eee0b72-6b50-4b31-8de9-dcda1a6d7202
	  Boot ID:                    8afc220b-a697-4b20-991b-858204b503d2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-ndgvz       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m
	  kube-system                 kube-proxy-nrw9z    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age              From             Message
	  ----    ------                   ----             ----             -------
	  Normal  Starting                 3m55s            kube-proxy       
	  Normal  NodeHasSufficientMemory  4m (x2 over 4m)  kubelet          Node ha-794405-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m (x2 over 4m)  kubelet          Node ha-794405-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m (x2 over 4m)  kubelet          Node ha-794405-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m               kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m59s            node-controller  Node ha-794405-m04 event: Registered Node ha-794405-m04 in Controller
	  Normal  RegisteredNode           3m59s            node-controller  Node ha-794405-m04 event: Registered Node ha-794405-m04 in Controller
	  Normal  RegisteredNode           3m56s            node-controller  Node ha-794405-m04 event: Registered Node ha-794405-m04 in Controller
	  Normal  NodeReady                3m41s            kubelet          Node ha-794405-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Jul29 17:49] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.049871] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040156] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.724474] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.475771] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.618211] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +12.653696] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.053781] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058152] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.186373] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.123683] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.267498] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +4.093512] systemd-fstab-generator[766]: Ignoring "noauto" option for root device
	[  +4.553872] systemd-fstab-generator[951]: Ignoring "noauto" option for root device
	[  +0.061033] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.996135] kauditd_printk_skb: 79 callbacks suppressed
	[  +0.105049] systemd-fstab-generator[1368]: Ignoring "noauto" option for root device
	[Jul29 17:50] kauditd_printk_skb: 15 callbacks suppressed
	[ +15.275633] kauditd_printk_skb: 38 callbacks suppressed
	[ +40.101588] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [e224997d35927a2245c0946f16313307ad55fb6c004535e745af98f59921405e] <==
	{"level":"warn","ts":"2024-07-29T17:57:04.792616Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"6da3c9e913621171","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:57:04.794535Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"6da3c9e913621171","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:57:04.800781Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"6da3c9e913621171","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:57:04.804448Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"6da3c9e913621171","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:57:04.81573Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"6da3c9e913621171","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:57:04.822721Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"6da3c9e913621171","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:57:04.829116Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"6da3c9e913621171","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:57:04.832831Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"6da3c9e913621171","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:57:04.835822Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"6da3c9e913621171","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:57:04.842973Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"6da3c9e913621171","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:57:04.848823Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"6da3c9e913621171","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:57:04.850489Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"6da3c9e913621171","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:57:04.85563Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"6da3c9e913621171","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:57:04.858949Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"6da3c9e913621171","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:57:04.861877Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"6da3c9e913621171","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:57:04.86947Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"6da3c9e913621171","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:57:04.882738Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"6da3c9e913621171","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:57:04.902997Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"6da3c9e913621171","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:57:04.908332Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"6da3c9e913621171","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:57:04.913136Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"6da3c9e913621171","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:57:04.923695Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"6da3c9e913621171","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:57:04.932259Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"6da3c9e913621171","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:57:04.940089Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"6da3c9e913621171","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:57:04.951106Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"6da3c9e913621171","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T17:57:04.994041Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"6da3c9e913621171","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 17:57:05 up 7 min,  0 users,  load average: 0.07, 0.23, 0.16
	Linux ha-794405 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [5005f4869048ef0f06e4eb1b5d7da9e4d2f016398c43e253b17b0445643472b5] <==
	I0729 17:56:25.715114       1 main.go:322] Node ha-794405-m04 has CIDR [10.244.3.0/24] 
	I0729 17:56:35.712797       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0729 17:56:35.712910       1 main.go:299] handling current node
	I0729 17:56:35.712952       1 main.go:295] Handling node with IPs: map[192.168.39.62:{}]
	I0729 17:56:35.712989       1 main.go:322] Node ha-794405-m02 has CIDR [10.244.1.0/24] 
	I0729 17:56:35.713178       1 main.go:295] Handling node with IPs: map[192.168.39.185:{}]
	I0729 17:56:35.713222       1 main.go:322] Node ha-794405-m03 has CIDR [10.244.2.0/24] 
	I0729 17:56:35.713305       1 main.go:295] Handling node with IPs: map[192.168.39.179:{}]
	I0729 17:56:35.713325       1 main.go:322] Node ha-794405-m04 has CIDR [10.244.3.0/24] 
	I0729 17:56:45.706051       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0729 17:56:45.706193       1 main.go:299] handling current node
	I0729 17:56:45.706228       1 main.go:295] Handling node with IPs: map[192.168.39.62:{}]
	I0729 17:56:45.706246       1 main.go:322] Node ha-794405-m02 has CIDR [10.244.1.0/24] 
	I0729 17:56:45.706449       1 main.go:295] Handling node with IPs: map[192.168.39.185:{}]
	I0729 17:56:45.706479       1 main.go:322] Node ha-794405-m03 has CIDR [10.244.2.0/24] 
	I0729 17:56:45.706595       1 main.go:295] Handling node with IPs: map[192.168.39.179:{}]
	I0729 17:56:45.706616       1 main.go:322] Node ha-794405-m04 has CIDR [10.244.3.0/24] 
	I0729 17:56:55.705610       1 main.go:295] Handling node with IPs: map[192.168.39.179:{}]
	I0729 17:56:55.705730       1 main.go:322] Node ha-794405-m04 has CIDR [10.244.3.0/24] 
	I0729 17:56:55.705975       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0729 17:56:55.706020       1 main.go:299] handling current node
	I0729 17:56:55.706048       1 main.go:295] Handling node with IPs: map[192.168.39.62:{}]
	I0729 17:56:55.706055       1 main.go:322] Node ha-794405-m02 has CIDR [10.244.1.0/24] 
	I0729 17:56:55.706178       1 main.go:295] Handling node with IPs: map[192.168.39.185:{}]
	I0729 17:56:55.706212       1 main.go:322] Node ha-794405-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [985c673864e1a2dafce96d68ada1dba868c8e47de790d46e403469f1abd8bd8e] <==
	I0729 17:49:47.297566       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0729 17:49:47.314065       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0729 17:49:47.463932       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 17:50:00.559990       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0729 17:50:00.764278       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0729 17:52:02.764919       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0729 17:52:02.765219       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0729 17:52:02.765665       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 398.449µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0729 17:52:02.766888       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0729 17:52:02.767054       1 timeout.go:142] post-timeout activity - time-elapsed: 2.784713ms, POST "/api/v1/namespaces/kube-system/events" result: <nil>
	E0729 17:52:32.629890       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43196: use of closed network connection
	E0729 17:52:32.828114       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43206: use of closed network connection
	E0729 17:52:33.022775       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43220: use of closed network connection
	E0729 17:52:33.208138       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43250: use of closed network connection
	E0729 17:52:33.396698       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43270: use of closed network connection
	E0729 17:52:33.591298       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43292: use of closed network connection
	E0729 17:52:33.759147       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43300: use of closed network connection
	E0729 17:52:33.941785       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43312: use of closed network connection
	E0729 17:52:34.115769       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43328: use of closed network connection
	E0729 17:52:34.403229       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43352: use of closed network connection
	E0729 17:52:34.577470       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43368: use of closed network connection
	E0729 17:52:34.756509       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43394: use of closed network connection
	E0729 17:52:35.108999       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43424: use of closed network connection
	E0729 17:52:35.285603       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43428: use of closed network connection
	W0729 17:53:55.769696       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.102 192.168.39.185]
	
	
	==> kube-controller-manager [152a9fa24ee44b5e9f21db72728d07a6432ed62f3a5c2c05ca7c1cd6de36609a] <==
	I0729 17:52:29.351801       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.306µs"
	I0729 17:52:29.352301       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.836µs"
	I0729 17:52:29.352888       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.273µs"
	I0729 17:52:29.468088       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.39351ms"
	I0729 17:52:29.468289       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.648µs"
	I0729 17:52:30.154671       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.487µs"
	I0729 17:52:30.168073       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="69.181µs"
	I0729 17:52:30.190577       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.195µs"
	I0729 17:52:30.206651       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.85µs"
	I0729 17:52:30.229419       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.558µs"
	I0729 17:52:30.247923       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="70.998µs"
	I0729 17:52:31.361672       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.355758ms"
	I0729 17:52:31.363088       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.773µs"
	I0729 17:52:31.619902       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="70.939654ms"
	I0729 17:52:31.620005       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="73.5µs"
	I0729 17:52:32.173953       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.036769ms"
	I0729 17:52:32.174218       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="129.3µs"
	E0729 17:53:04.379805       1 certificate_controller.go:146] Sync csr-2lzzf failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-2lzzf": the object has been modified; please apply your changes to the latest version and try again
	I0729 17:53:04.624477       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-794405-m04\" does not exist"
	I0729 17:53:04.713030       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-794405-m04" podCIDRs=["10.244.3.0/24"]
	I0729 17:53:05.800264       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-794405-m04"
	I0729 17:53:23.171449       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-794405-m04"
	I0729 17:54:25.829806       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-794405-m04"
	I0729 17:54:26.038165       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.586432ms"
	I0729 17:54:26.038304       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="95.486µs"
	
	
	==> kube-proxy [2992a8242c5e7814d064049bad66f003aaa89848eaa74422ed7c90776fdf849f] <==
	I0729 17:50:01.835652       1 server_linux.go:69] "Using iptables proxy"
	I0729 17:50:01.858952       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.102"]
	I0729 17:50:01.952952       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 17:50:01.953017       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 17:50:01.953035       1 server_linux.go:165] "Using iptables Proxier"
	I0729 17:50:01.958159       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 17:50:01.958471       1 server.go:872] "Version info" version="v1.30.3"
	I0729 17:50:01.958501       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 17:50:01.959959       1 config.go:101] "Starting endpoint slice config controller"
	I0729 17:50:01.960004       1 config.go:192] "Starting service config controller"
	I0729 17:50:01.960203       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 17:50:01.960205       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 17:50:01.961278       1 config.go:319] "Starting node config controller"
	I0729 17:50:01.961285       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 17:50:02.061293       1 shared_informer.go:320] Caches are synced for service config
	I0729 17:50:02.061493       1 shared_informer.go:320] Caches are synced for node config
	I0729 17:50:02.061523       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [fca3429715988ae09489d3d0f512d28babc61b2e2b8a3324612fba1c47839f25] <==
	W0729 17:49:45.647067       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 17:49:45.647163       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0729 17:49:48.625258       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0729 17:52:01.980170       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-ndmlm\": pod kube-proxy-ndmlm is already assigned to node \"ha-794405-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-ndmlm" node="ha-794405-m03"
	E0729 17:52:01.980690       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-ndmlm\": pod kube-proxy-ndmlm is already assigned to node \"ha-794405-m03\"" pod="kube-system/kube-proxy-ndmlm"
	E0729 17:52:02.039551       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-sw765\": pod kube-proxy-sw765 is already assigned to node \"ha-794405-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-sw765" node="ha-794405-m03"
	E0729 17:52:02.039623       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 02b5f9f8-0406-4261-bd3b-7661ddc6ddd0(kube-system/kube-proxy-sw765) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-sw765"
	E0729 17:52:02.039643       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-sw765\": pod kube-proxy-sw765 is already assigned to node \"ha-794405-m03\"" pod="kube-system/kube-proxy-sw765"
	I0729 17:52:02.039657       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-sw765" node="ha-794405-m03"
	E0729 17:53:04.694842       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-nrw9z\": pod kube-proxy-nrw9z is already assigned to node \"ha-794405-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-nrw9z" node="ha-794405-m04"
	E0729 17:53:04.695637       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod bceaebd9-016e-4ebb-ae2e-b926486cde55(kube-system/kube-proxy-nrw9z) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-nrw9z"
	E0729 17:53:04.695848       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-nrw9z\": pod kube-proxy-nrw9z is already assigned to node \"ha-794405-m04\"" pod="kube-system/kube-proxy-nrw9z"
	I0729 17:53:04.695953       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-nrw9z" node="ha-794405-m04"
	E0729 17:53:04.697141       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-ndgvz\": pod kindnet-ndgvz is already assigned to node \"ha-794405-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-ndgvz" node="ha-794405-m04"
	E0729 17:53:04.697804       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod dac03401-2d2d-4972-b74f-cf1918668c7f(kube-system/kindnet-ndgvz) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-ndgvz"
	E0729 17:53:04.697917       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-ndgvz\": pod kindnet-ndgvz is already assigned to node \"ha-794405-m04\"" pod="kube-system/kindnet-ndgvz"
	I0729 17:53:04.698038       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-ndgvz" node="ha-794405-m04"
	E0729 17:53:04.863070       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-tfmmp\": pod kube-proxy-tfmmp is already assigned to node \"ha-794405-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-tfmmp" node="ha-794405-m04"
	E0729 17:53:04.863407       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod d0afa891-9c8f-4853-947e-8772e52029d8(kube-system/kube-proxy-tfmmp) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-tfmmp"
	E0729 17:53:04.863492       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-tfmmp\": pod kube-proxy-tfmmp is already assigned to node \"ha-794405-m04\"" pod="kube-system/kube-proxy-tfmmp"
	I0729 17:53:04.863555       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-tfmmp" node="ha-794405-m04"
	E0729 17:53:04.866462       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-bkgfr\": pod kindnet-bkgfr is already assigned to node \"ha-794405-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-bkgfr" node="ha-794405-m04"
	E0729 17:53:04.866574       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod d2e31787-c905-4df5-9d46-7f0ceaf731e6(kube-system/kindnet-bkgfr) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-bkgfr"
	E0729 17:53:04.866597       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-bkgfr\": pod kindnet-bkgfr is already assigned to node \"ha-794405-m04\"" pod="kube-system/kindnet-bkgfr"
	I0729 17:53:04.866691       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-bkgfr" node="ha-794405-m04"
	
	
	==> kubelet <==
	Jul 29 17:52:47 ha-794405 kubelet[1375]: E0729 17:52:47.519288    1375 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 17:52:47 ha-794405 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 17:52:47 ha-794405 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 17:52:47 ha-794405 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 17:52:47 ha-794405 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 17:53:47 ha-794405 kubelet[1375]: E0729 17:53:47.516416    1375 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 17:53:47 ha-794405 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 17:53:47 ha-794405 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 17:53:47 ha-794405 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 17:53:47 ha-794405 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 17:54:47 ha-794405 kubelet[1375]: E0729 17:54:47.516417    1375 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 17:54:47 ha-794405 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 17:54:47 ha-794405 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 17:54:47 ha-794405 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 17:54:47 ha-794405 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 17:55:47 ha-794405 kubelet[1375]: E0729 17:55:47.514834    1375 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 17:55:47 ha-794405 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 17:55:47 ha-794405 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 17:55:47 ha-794405 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 17:55:47 ha-794405 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 17:56:47 ha-794405 kubelet[1375]: E0729 17:56:47.514140    1375 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 17:56:47 ha-794405 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 17:56:47 ha-794405 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 17:56:47 ha-794405 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 17:56:47 ha-794405 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-794405 -n ha-794405
helpers_test.go:261: (dbg) Run:  kubectl --context ha-794405 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (57.59s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (375.23s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-794405 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-794405 -v=7 --alsologtostderr
E0729 17:58:18.903619   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/functional-810151/client.crt: no such file or directory
E0729 17:58:46.588466   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/functional-810151/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-794405 -v=7 --alsologtostderr: exit status 82 (2m1.767393189s)

                                                
                                                
-- stdout --
	* Stopping node "ha-794405-m04"  ...
	* Stopping node "ha-794405-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 17:57:06.371154  111467 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:57:06.371292  111467 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:57:06.371301  111467 out.go:304] Setting ErrFile to fd 2...
	I0729 17:57:06.371305  111467 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:57:06.371474  111467 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19339-88081/.minikube/bin
	I0729 17:57:06.371670  111467 out.go:298] Setting JSON to false
	I0729 17:57:06.371761  111467 mustload.go:65] Loading cluster: ha-794405
	I0729 17:57:06.372102  111467 config.go:182] Loaded profile config "ha-794405": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:57:06.372222  111467 profile.go:143] Saving config to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/config.json ...
	I0729 17:57:06.372399  111467 mustload.go:65] Loading cluster: ha-794405
	I0729 17:57:06.372527  111467 config.go:182] Loaded profile config "ha-794405": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:57:06.372555  111467 stop.go:39] StopHost: ha-794405-m04
	I0729 17:57:06.372943  111467 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:57:06.372980  111467 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:57:06.387613  111467 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40185
	I0729 17:57:06.388115  111467 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:57:06.388771  111467 main.go:141] libmachine: Using API Version  1
	I0729 17:57:06.388799  111467 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:57:06.389218  111467 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:57:06.391355  111467 out.go:177] * Stopping node "ha-794405-m04"  ...
	I0729 17:57:06.392545  111467 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0729 17:57:06.392584  111467 main.go:141] libmachine: (ha-794405-m04) Calling .DriverName
	I0729 17:57:06.392815  111467 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0729 17:57:06.392844  111467 main.go:141] libmachine: (ha-794405-m04) Calling .GetSSHHostname
	I0729 17:57:06.395409  111467 main.go:141] libmachine: (ha-794405-m04) DBG | domain ha-794405-m04 has defined MAC address 52:54:00:9f:d3:c3 in network mk-ha-794405
	I0729 17:57:06.395821  111467 main.go:141] libmachine: (ha-794405-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:d3:c3", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:52:49 +0000 UTC Type:0 Mac:52:54:00:9f:d3:c3 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-794405-m04 Clientid:01:52:54:00:9f:d3:c3}
	I0729 17:57:06.395851  111467 main.go:141] libmachine: (ha-794405-m04) DBG | domain ha-794405-m04 has defined IP address 192.168.39.179 and MAC address 52:54:00:9f:d3:c3 in network mk-ha-794405
	I0729 17:57:06.396015  111467 main.go:141] libmachine: (ha-794405-m04) Calling .GetSSHPort
	I0729 17:57:06.396192  111467 main.go:141] libmachine: (ha-794405-m04) Calling .GetSSHKeyPath
	I0729 17:57:06.396339  111467 main.go:141] libmachine: (ha-794405-m04) Calling .GetSSHUsername
	I0729 17:57:06.396454  111467 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m04/id_rsa Username:docker}
	I0729 17:57:06.483243  111467 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0729 17:57:06.535993  111467 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0729 17:57:06.589465  111467 main.go:141] libmachine: Stopping "ha-794405-m04"...
	I0729 17:57:06.589496  111467 main.go:141] libmachine: (ha-794405-m04) Calling .GetState
	I0729 17:57:06.591070  111467 main.go:141] libmachine: (ha-794405-m04) Calling .Stop
	I0729 17:57:06.594657  111467 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 0/120
	I0729 17:57:07.683560  111467 main.go:141] libmachine: (ha-794405-m04) Calling .GetState
	I0729 17:57:07.684943  111467 main.go:141] libmachine: Machine "ha-794405-m04" was stopped.
	I0729 17:57:07.684965  111467 stop.go:75] duration metric: took 1.292418588s to stop
	I0729 17:57:07.684989  111467 stop.go:39] StopHost: ha-794405-m03
	I0729 17:57:07.685329  111467 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:57:07.685384  111467 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:57:07.699981  111467 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40571
	I0729 17:57:07.700354  111467 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:57:07.700982  111467 main.go:141] libmachine: Using API Version  1
	I0729 17:57:07.701005  111467 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:57:07.701413  111467 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:57:07.703658  111467 out.go:177] * Stopping node "ha-794405-m03"  ...
	I0729 17:57:07.704829  111467 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0729 17:57:07.704888  111467 main.go:141] libmachine: (ha-794405-m03) Calling .DriverName
	I0729 17:57:07.705128  111467 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0729 17:57:07.705151  111467 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHHostname
	I0729 17:57:07.708082  111467 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:57:07.708494  111467 main.go:141] libmachine: (ha-794405-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:a7:17", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:51:28 +0000 UTC Type:0 Mac:52:54:00:6d:a7:17 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-794405-m03 Clientid:01:52:54:00:6d:a7:17}
	I0729 17:57:07.708526  111467 main.go:141] libmachine: (ha-794405-m03) DBG | domain ha-794405-m03 has defined IP address 192.168.39.185 and MAC address 52:54:00:6d:a7:17 in network mk-ha-794405
	I0729 17:57:07.708693  111467 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHPort
	I0729 17:57:07.708906  111467 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHKeyPath
	I0729 17:57:07.709084  111467 main.go:141] libmachine: (ha-794405-m03) Calling .GetSSHUsername
	I0729 17:57:07.709226  111467 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m03/id_rsa Username:docker}
	I0729 17:57:07.793516  111467 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0729 17:57:07.846867  111467 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0729 17:57:07.900329  111467 main.go:141] libmachine: Stopping "ha-794405-m03"...
	I0729 17:57:07.900355  111467 main.go:141] libmachine: (ha-794405-m03) Calling .GetState
	I0729 17:57:07.901999  111467 main.go:141] libmachine: (ha-794405-m03) Calling .Stop
	I0729 17:57:07.905540  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 0/120
	I0729 17:57:08.907276  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 1/120
	I0729 17:57:09.908669  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 2/120
	I0729 17:57:10.910016  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 3/120
	I0729 17:57:11.911359  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 4/120
	I0729 17:57:12.913096  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 5/120
	I0729 17:57:13.914516  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 6/120
	I0729 17:57:14.915783  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 7/120
	I0729 17:57:15.917441  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 8/120
	I0729 17:57:16.919317  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 9/120
	I0729 17:57:17.921332  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 10/120
	I0729 17:57:18.922974  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 11/120
	I0729 17:57:19.924438  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 12/120
	I0729 17:57:20.926054  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 13/120
	I0729 17:57:21.927368  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 14/120
	I0729 17:57:22.928934  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 15/120
	I0729 17:57:23.930586  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 16/120
	I0729 17:57:24.931786  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 17/120
	I0729 17:57:25.933356  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 18/120
	I0729 17:57:26.934552  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 19/120
	I0729 17:57:27.936211  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 20/120
	I0729 17:57:28.937681  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 21/120
	I0729 17:57:29.939205  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 22/120
	I0729 17:57:30.940724  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 23/120
	I0729 17:57:31.941899  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 24/120
	I0729 17:57:32.943589  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 25/120
	I0729 17:57:33.945121  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 26/120
	I0729 17:57:34.946441  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 27/120
	I0729 17:57:35.947882  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 28/120
	I0729 17:57:36.949310  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 29/120
	I0729 17:57:37.950931  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 30/120
	I0729 17:57:38.952224  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 31/120
	I0729 17:57:39.953491  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 32/120
	I0729 17:57:40.954775  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 33/120
	I0729 17:57:41.956070  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 34/120
	I0729 17:57:42.957453  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 35/120
	I0729 17:57:43.959009  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 36/120
	I0729 17:57:44.960409  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 37/120
	I0729 17:57:45.961800  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 38/120
	I0729 17:57:46.963041  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 39/120
	I0729 17:57:47.964415  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 40/120
	I0729 17:57:48.965777  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 41/120
	I0729 17:57:49.967063  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 42/120
	I0729 17:57:50.968289  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 43/120
	I0729 17:57:51.969679  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 44/120
	I0729 17:57:52.971461  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 45/120
	I0729 17:57:53.972981  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 46/120
	I0729 17:57:54.974432  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 47/120
	I0729 17:57:55.975973  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 48/120
	I0729 17:57:56.977688  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 49/120
	I0729 17:57:57.979560  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 50/120
	I0729 17:57:58.980996  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 51/120
	I0729 17:57:59.982211  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 52/120
	I0729 17:58:00.983606  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 53/120
	I0729 17:58:01.985016  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 54/120
	I0729 17:58:02.986517  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 55/120
	I0729 17:58:03.988020  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 56/120
	I0729 17:58:04.989635  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 57/120
	I0729 17:58:05.991117  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 58/120
	I0729 17:58:06.992455  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 59/120
	I0729 17:58:07.994215  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 60/120
	I0729 17:58:08.995534  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 61/120
	I0729 17:58:09.997214  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 62/120
	I0729 17:58:10.998579  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 63/120
	I0729 17:58:12.000156  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 64/120
	I0729 17:58:13.001513  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 65/120
	I0729 17:58:14.002879  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 66/120
	I0729 17:58:15.004280  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 67/120
	I0729 17:58:16.006229  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 68/120
	I0729 17:58:17.007580  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 69/120
	I0729 17:58:18.009411  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 70/120
	I0729 17:58:19.010806  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 71/120
	I0729 17:58:20.012240  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 72/120
	I0729 17:58:21.013664  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 73/120
	I0729 17:58:22.015606  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 74/120
	I0729 17:58:23.017473  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 75/120
	I0729 17:58:24.018729  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 76/120
	I0729 17:58:25.020616  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 77/120
	I0729 17:58:26.022003  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 78/120
	I0729 17:58:27.023309  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 79/120
	I0729 17:58:28.025059  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 80/120
	I0729 17:58:29.026469  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 81/120
	I0729 17:58:30.028897  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 82/120
	I0729 17:58:31.030142  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 83/120
	I0729 17:58:32.031766  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 84/120
	I0729 17:58:33.033684  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 85/120
	I0729 17:58:34.035017  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 86/120
	I0729 17:58:35.036473  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 87/120
	I0729 17:58:36.038444  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 88/120
	I0729 17:58:37.039766  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 89/120
	I0729 17:58:38.041615  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 90/120
	I0729 17:58:39.043082  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 91/120
	I0729 17:58:40.045234  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 92/120
	I0729 17:58:41.046576  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 93/120
	I0729 17:58:42.047890  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 94/120
	I0729 17:58:43.049805  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 95/120
	I0729 17:58:44.051266  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 96/120
	I0729 17:58:45.052656  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 97/120
	I0729 17:58:46.054125  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 98/120
	I0729 17:58:47.055488  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 99/120
	I0729 17:58:48.057386  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 100/120
	I0729 17:58:49.058753  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 101/120
	I0729 17:58:50.060167  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 102/120
	I0729 17:58:51.061638  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 103/120
	I0729 17:58:52.063440  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 104/120
	I0729 17:58:53.065300  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 105/120
	I0729 17:58:54.066518  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 106/120
	I0729 17:58:55.068295  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 107/120
	I0729 17:58:56.069723  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 108/120
	I0729 17:58:57.071546  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 109/120
	I0729 17:58:58.073263  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 110/120
	I0729 17:58:59.074665  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 111/120
	I0729 17:59:00.075933  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 112/120
	I0729 17:59:01.077422  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 113/120
	I0729 17:59:02.078777  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 114/120
	I0729 17:59:03.080378  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 115/120
	I0729 17:59:04.082052  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 116/120
	I0729 17:59:05.083359  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 117/120
	I0729 17:59:06.085028  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 118/120
	I0729 17:59:07.087537  111467 main.go:141] libmachine: (ha-794405-m03) Waiting for machine to stop 119/120
	I0729 17:59:08.088397  111467 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0729 17:59:08.088465  111467 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0729 17:59:08.090285  111467 out.go:177] 
	W0729 17:59:08.091615  111467 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0729 17:59:08.091634  111467 out.go:239] * 
	W0729 17:59:08.094961  111467 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 17:59:08.096257  111467 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-794405 -v=7 --alsologtostderr" : exit status 82
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-794405 --wait=true -v=7 --alsologtostderr
E0729 18:00:53.333789   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/client.crt: no such file or directory
E0729 18:02:16.380402   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/client.crt: no such file or directory
E0729 18:03:18.903237   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/functional-810151/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-794405 --wait=true -v=7 --alsologtostderr: (4m10.959600311s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-794405
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-794405 -n ha-794405
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-794405 logs -n 25: (1.730347996s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-794405 cp ha-794405-m03:/home/docker/cp-test.txt                              | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | ha-794405-m02:/home/docker/cp-test_ha-794405-m03_ha-794405-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-794405 ssh -n                                                                 | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | ha-794405-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-794405 ssh -n ha-794405-m02 sudo cat                                          | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | /home/docker/cp-test_ha-794405-m03_ha-794405-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-794405 cp ha-794405-m03:/home/docker/cp-test.txt                              | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | ha-794405-m04:/home/docker/cp-test_ha-794405-m03_ha-794405-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-794405 ssh -n                                                                 | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | ha-794405-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-794405 ssh -n ha-794405-m04 sudo cat                                          | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | /home/docker/cp-test_ha-794405-m03_ha-794405-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-794405 cp testdata/cp-test.txt                                                | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | ha-794405-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-794405 ssh -n                                                                 | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | ha-794405-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-794405 cp ha-794405-m04:/home/docker/cp-test.txt                              | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1800704997/001/cp-test_ha-794405-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-794405 ssh -n                                                                 | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | ha-794405-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-794405 cp ha-794405-m04:/home/docker/cp-test.txt                              | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | ha-794405:/home/docker/cp-test_ha-794405-m04_ha-794405.txt                       |           |         |         |                     |                     |
	| ssh     | ha-794405 ssh -n                                                                 | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | ha-794405-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-794405 ssh -n ha-794405 sudo cat                                              | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | /home/docker/cp-test_ha-794405-m04_ha-794405.txt                                 |           |         |         |                     |                     |
	| cp      | ha-794405 cp ha-794405-m04:/home/docker/cp-test.txt                              | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | ha-794405-m02:/home/docker/cp-test_ha-794405-m04_ha-794405-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-794405 ssh -n                                                                 | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | ha-794405-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-794405 ssh -n ha-794405-m02 sudo cat                                          | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | /home/docker/cp-test_ha-794405-m04_ha-794405-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-794405 cp ha-794405-m04:/home/docker/cp-test.txt                              | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | ha-794405-m03:/home/docker/cp-test_ha-794405-m04_ha-794405-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-794405 ssh -n                                                                 | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | ha-794405-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-794405 ssh -n ha-794405-m03 sudo cat                                          | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | /home/docker/cp-test_ha-794405-m04_ha-794405-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-794405 node stop m02 -v=7                                                     | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-794405 node start m02 -v=7                                                    | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:56 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-794405 -v=7                                                           | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:57 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-794405 -v=7                                                                | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:57 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-794405 --wait=true -v=7                                                    | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:59 UTC | 29 Jul 24 18:03 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-794405                                                                | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 18:03 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 17:59:08
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 17:59:08.142945  111957 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:59:08.143224  111957 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:59:08.143235  111957 out.go:304] Setting ErrFile to fd 2...
	I0729 17:59:08.143242  111957 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:59:08.143449  111957 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19339-88081/.minikube/bin
	I0729 17:59:08.143999  111957 out.go:298] Setting JSON to false
	I0729 17:59:08.144985  111957 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":9668,"bootTime":1722266280,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 17:59:08.145046  111957 start.go:139] virtualization: kvm guest
	I0729 17:59:08.147872  111957 out.go:177] * [ha-794405] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 17:59:08.149368  111957 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 17:59:08.149418  111957 notify.go:220] Checking for updates...
	I0729 17:59:08.151939  111957 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 17:59:08.153316  111957 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19339-88081/kubeconfig
	I0729 17:59:08.154892  111957 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19339-88081/.minikube
	I0729 17:59:08.156295  111957 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 17:59:08.157559  111957 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 17:59:08.159284  111957 config.go:182] Loaded profile config "ha-794405": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:59:08.159375  111957 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 17:59:08.159857  111957 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:59:08.159911  111957 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:59:08.175361  111957 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37831
	I0729 17:59:08.175864  111957 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:59:08.176422  111957 main.go:141] libmachine: Using API Version  1
	I0729 17:59:08.176444  111957 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:59:08.176747  111957 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:59:08.176951  111957 main.go:141] libmachine: (ha-794405) Calling .DriverName
	I0729 17:59:08.213586  111957 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 17:59:08.214993  111957 start.go:297] selected driver: kvm2
	I0729 17:59:08.215009  111957 start.go:901] validating driver "kvm2" against &{Name:ha-794405 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.3 ClusterName:ha-794405 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.62 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.185 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.179 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false ef
k:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9
p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:59:08.215175  111957 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 17:59:08.215488  111957 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:59:08.215577  111957 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19339-88081/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 17:59:08.230967  111957 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 17:59:08.231615  111957 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 17:59:08.231648  111957 cni.go:84] Creating CNI manager for ""
	I0729 17:59:08.231656  111957 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0729 17:59:08.231733  111957 start.go:340] cluster config:
	{Name:ha-794405 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-794405 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.62 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.185 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.179 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-til
ler:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:59:08.231900  111957 iso.go:125] acquiring lock: {Name:mkff602c753bfa3e0d79d57ee8dc490f2e8d0298 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:59:08.234226  111957 out.go:177] * Starting "ha-794405" primary control-plane node in "ha-794405" cluster
	I0729 17:59:08.235488  111957 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 17:59:08.235522  111957 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 17:59:08.235536  111957 cache.go:56] Caching tarball of preloaded images
	I0729 17:59:08.235614  111957 preload.go:172] Found /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 17:59:08.235625  111957 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 17:59:08.235760  111957 profile.go:143] Saving config to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/config.json ...
	I0729 17:59:08.235964  111957 start.go:360] acquireMachinesLock for ha-794405: {Name:mkb4fc91615189a18c2505c715f6575ee0da2912 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:59:08.236016  111957 start.go:364] duration metric: took 32.947µs to acquireMachinesLock for "ha-794405"
	I0729 17:59:08.236036  111957 start.go:96] Skipping create...Using existing machine configuration
	I0729 17:59:08.236047  111957 fix.go:54] fixHost starting: 
	I0729 17:59:08.236320  111957 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:59:08.236358  111957 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:59:08.251130  111957 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44497
	I0729 17:59:08.251518  111957 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:59:08.252055  111957 main.go:141] libmachine: Using API Version  1
	I0729 17:59:08.252077  111957 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:59:08.252405  111957 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:59:08.252609  111957 main.go:141] libmachine: (ha-794405) Calling .DriverName
	I0729 17:59:08.252748  111957 main.go:141] libmachine: (ha-794405) Calling .GetState
	I0729 17:59:08.254319  111957 fix.go:112] recreateIfNeeded on ha-794405: state=Running err=<nil>
	W0729 17:59:08.254336  111957 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 17:59:08.256275  111957 out.go:177] * Updating the running kvm2 "ha-794405" VM ...
	I0729 17:59:08.257481  111957 machine.go:94] provisionDockerMachine start ...
	I0729 17:59:08.257502  111957 main.go:141] libmachine: (ha-794405) Calling .DriverName
	I0729 17:59:08.257699  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHHostname
	I0729 17:59:08.259843  111957 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:59:08.260219  111957 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:59:08.260248  111957 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:59:08.260383  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHPort
	I0729 17:59:08.260547  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:59:08.260706  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:59:08.260825  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHUsername
	I0729 17:59:08.261042  111957 main.go:141] libmachine: Using SSH client type: native
	I0729 17:59:08.261233  111957 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0729 17:59:08.261249  111957 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 17:59:08.366286  111957 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-794405
	
	I0729 17:59:08.366324  111957 main.go:141] libmachine: (ha-794405) Calling .GetMachineName
	I0729 17:59:08.366616  111957 buildroot.go:166] provisioning hostname "ha-794405"
	I0729 17:59:08.366649  111957 main.go:141] libmachine: (ha-794405) Calling .GetMachineName
	I0729 17:59:08.366911  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHHostname
	I0729 17:59:08.369351  111957 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:59:08.369736  111957 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:59:08.369760  111957 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:59:08.369904  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHPort
	I0729 17:59:08.370096  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:59:08.370248  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:59:08.370376  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHUsername
	I0729 17:59:08.370563  111957 main.go:141] libmachine: Using SSH client type: native
	I0729 17:59:08.370787  111957 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0729 17:59:08.370800  111957 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-794405 && echo "ha-794405" | sudo tee /etc/hostname
	I0729 17:59:08.490421  111957 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-794405
	
	I0729 17:59:08.490447  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHHostname
	I0729 17:59:08.493087  111957 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:59:08.493517  111957 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:59:08.493559  111957 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:59:08.493718  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHPort
	I0729 17:59:08.493901  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:59:08.494086  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:59:08.494246  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHUsername
	I0729 17:59:08.494414  111957 main.go:141] libmachine: Using SSH client type: native
	I0729 17:59:08.494581  111957 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0729 17:59:08.494598  111957 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-794405' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-794405/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-794405' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 17:59:08.597829  111957 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 17:59:08.597867  111957 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19339-88081/.minikube CaCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19339-88081/.minikube}
	I0729 17:59:08.597918  111957 buildroot.go:174] setting up certificates
	I0729 17:59:08.597931  111957 provision.go:84] configureAuth start
	I0729 17:59:08.597942  111957 main.go:141] libmachine: (ha-794405) Calling .GetMachineName
	I0729 17:59:08.598230  111957 main.go:141] libmachine: (ha-794405) Calling .GetIP
	I0729 17:59:08.600889  111957 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:59:08.601231  111957 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:59:08.601258  111957 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:59:08.601421  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHHostname
	I0729 17:59:08.603821  111957 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:59:08.604215  111957 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:59:08.604238  111957 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:59:08.604406  111957 provision.go:143] copyHostCerts
	I0729 17:59:08.604441  111957 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem
	I0729 17:59:08.604526  111957 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem, removing ...
	I0729 17:59:08.604540  111957 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem
	I0729 17:59:08.604622  111957 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem (1078 bytes)
	I0729 17:59:08.604725  111957 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem
	I0729 17:59:08.604753  111957 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem, removing ...
	I0729 17:59:08.604772  111957 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem
	I0729 17:59:08.604822  111957 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem (1123 bytes)
	I0729 17:59:08.604908  111957 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem
	I0729 17:59:08.604932  111957 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem, removing ...
	I0729 17:59:08.604941  111957 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem
	I0729 17:59:08.604979  111957 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem (1679 bytes)
	I0729 17:59:08.605032  111957 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem org=jenkins.ha-794405 san=[127.0.0.1 192.168.39.102 ha-794405 localhost minikube]
	I0729 17:59:08.702069  111957 provision.go:177] copyRemoteCerts
	I0729 17:59:08.702132  111957 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 17:59:08.702154  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHHostname
	I0729 17:59:08.704510  111957 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:59:08.704814  111957 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:59:08.704852  111957 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:59:08.704994  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHPort
	I0729 17:59:08.705187  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:59:08.705373  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHUsername
	I0729 17:59:08.705538  111957 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405/id_rsa Username:docker}
	I0729 17:59:08.788219  111957 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 17:59:08.788298  111957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 17:59:08.813392  111957 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 17:59:08.813460  111957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0729 17:59:08.840257  111957 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 17:59:08.840332  111957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 17:59:08.863830  111957 provision.go:87] duration metric: took 265.887585ms to configureAuth
	I0729 17:59:08.863850  111957 buildroot.go:189] setting minikube options for container-runtime
	I0729 17:59:08.864066  111957 config.go:182] Loaded profile config "ha-794405": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:59:08.864152  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHHostname
	I0729 17:59:08.866645  111957 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:59:08.867028  111957 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:59:08.867054  111957 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:59:08.867214  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHPort
	I0729 17:59:08.867380  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:59:08.867537  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:59:08.867680  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHUsername
	I0729 17:59:08.867833  111957 main.go:141] libmachine: Using SSH client type: native
	I0729 17:59:08.868008  111957 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0729 17:59:08.868027  111957 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 18:00:39.813265  111957 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 18:00:39.813299  111957 machine.go:97] duration metric: took 1m31.555799087s to provisionDockerMachine
	I0729 18:00:39.813315  111957 start.go:293] postStartSetup for "ha-794405" (driver="kvm2")
	I0729 18:00:39.813331  111957 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 18:00:39.813367  111957 main.go:141] libmachine: (ha-794405) Calling .DriverName
	I0729 18:00:39.813724  111957 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 18:00:39.813759  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHHostname
	I0729 18:00:39.817020  111957 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 18:00:39.817525  111957 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 18:00:39.817552  111957 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 18:00:39.817716  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHPort
	I0729 18:00:39.817918  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 18:00:39.818094  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHUsername
	I0729 18:00:39.818225  111957 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405/id_rsa Username:docker}
	I0729 18:00:39.946580  111957 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 18:00:39.961958  111957 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 18:00:39.962000  111957 filesync.go:126] Scanning /home/jenkins/minikube-integration/19339-88081/.minikube/addons for local assets ...
	I0729 18:00:39.962068  111957 filesync.go:126] Scanning /home/jenkins/minikube-integration/19339-88081/.minikube/files for local assets ...
	I0729 18:00:39.962188  111957 filesync.go:149] local asset: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem -> 952822.pem in /etc/ssl/certs
	I0729 18:00:39.962205  111957 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem -> /etc/ssl/certs/952822.pem
	I0729 18:00:39.962330  111957 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 18:00:39.991306  111957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem --> /etc/ssl/certs/952822.pem (1708 bytes)
	I0729 18:00:40.047394  111957 start.go:296] duration metric: took 234.062439ms for postStartSetup
	I0729 18:00:40.047440  111957 main.go:141] libmachine: (ha-794405) Calling .DriverName
	I0729 18:00:40.047791  111957 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0729 18:00:40.047835  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHHostname
	I0729 18:00:40.050206  111957 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 18:00:40.050710  111957 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 18:00:40.050738  111957 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 18:00:40.050896  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHPort
	I0729 18:00:40.051097  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 18:00:40.051300  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHUsername
	I0729 18:00:40.051486  111957 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405/id_rsa Username:docker}
	W0729 18:00:40.133134  111957 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0729 18:00:40.133162  111957 fix.go:56] duration metric: took 1m31.89711748s for fixHost
	I0729 18:00:40.133188  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHHostname
	I0729 18:00:40.135605  111957 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 18:00:40.135965  111957 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 18:00:40.135997  111957 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 18:00:40.136238  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHPort
	I0729 18:00:40.136460  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 18:00:40.136635  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 18:00:40.136748  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHUsername
	I0729 18:00:40.136967  111957 main.go:141] libmachine: Using SSH client type: native
	I0729 18:00:40.137130  111957 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0729 18:00:40.137142  111957 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 18:00:40.241437  111957 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722276040.206346539
	
	I0729 18:00:40.241461  111957 fix.go:216] guest clock: 1722276040.206346539
	I0729 18:00:40.241469  111957 fix.go:229] Guest: 2024-07-29 18:00:40.206346539 +0000 UTC Remote: 2024-07-29 18:00:40.133170141 +0000 UTC m=+92.025983091 (delta=73.176398ms)
	I0729 18:00:40.241490  111957 fix.go:200] guest clock delta is within tolerance: 73.176398ms
	I0729 18:00:40.241496  111957 start.go:83] releasing machines lock for "ha-794405", held for 1m32.005469225s
	I0729 18:00:40.241514  111957 main.go:141] libmachine: (ha-794405) Calling .DriverName
	I0729 18:00:40.241789  111957 main.go:141] libmachine: (ha-794405) Calling .GetIP
	I0729 18:00:40.244372  111957 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 18:00:40.244766  111957 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 18:00:40.244799  111957 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 18:00:40.244916  111957 main.go:141] libmachine: (ha-794405) Calling .DriverName
	I0729 18:00:40.245443  111957 main.go:141] libmachine: (ha-794405) Calling .DriverName
	I0729 18:00:40.245638  111957 main.go:141] libmachine: (ha-794405) Calling .DriverName
	I0729 18:00:40.245769  111957 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 18:00:40.245839  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHHostname
	I0729 18:00:40.245854  111957 ssh_runner.go:195] Run: cat /version.json
	I0729 18:00:40.245872  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHHostname
	I0729 18:00:40.248396  111957 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 18:00:40.248690  111957 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 18:00:40.248764  111957 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 18:00:40.248791  111957 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 18:00:40.248929  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHPort
	I0729 18:00:40.249133  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 18:00:40.249172  111957 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 18:00:40.249197  111957 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 18:00:40.249333  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHUsername
	I0729 18:00:40.249428  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHPort
	I0729 18:00:40.249452  111957 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405/id_rsa Username:docker}
	I0729 18:00:40.249576  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 18:00:40.249701  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHUsername
	I0729 18:00:40.249872  111957 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405/id_rsa Username:docker}
	I0729 18:00:40.348184  111957 ssh_runner.go:195] Run: systemctl --version
	I0729 18:00:40.354452  111957 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 18:00:40.512704  111957 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 18:00:40.519487  111957 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 18:00:40.519548  111957 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 18:00:40.531469  111957 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0729 18:00:40.531493  111957 start.go:495] detecting cgroup driver to use...
	I0729 18:00:40.531566  111957 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 18:00:40.551271  111957 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 18:00:40.565558  111957 docker.go:217] disabling cri-docker service (if available) ...
	I0729 18:00:40.565608  111957 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 18:00:40.579329  111957 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 18:00:40.593470  111957 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 18:00:40.756228  111957 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 18:00:40.906230  111957 docker.go:233] disabling docker service ...
	I0729 18:00:40.906299  111957 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 18:00:40.927321  111957 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 18:00:40.940915  111957 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 18:00:41.087497  111957 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 18:00:41.230537  111957 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 18:00:41.244388  111957 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 18:00:41.263696  111957 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 18:00:41.263762  111957 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:00:41.274325  111957 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 18:00:41.274395  111957 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:00:41.285082  111957 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:00:41.296112  111957 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:00:41.307099  111957 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 18:00:41.318164  111957 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:00:41.328737  111957 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:00:41.340389  111957 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
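
The sed one-liners above rewrite /etc/crio/crio.conf.d/02-crio.conf in place, pinning the pause image to registry.k8s.io/pause:3.9 and switching cgroup_manager to cgroupfs. As an illustration only (this is not minikube's code), a minimal Go sketch of the same kind of line-oriented rewrite for those two settings, using the standard regexp package; the function name rewriteCrioConf is hypothetical:

package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf mirrors the first two sed edits in the log above:
// pin pause_image and force cgroup_manager to cgroupfs. The whole config
// file is passed in and returned as a string.
func rewriteCrioConf(conf string) string {
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	return cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
}

func main() {
	sample := "pause_image = \"registry.k8s.io/pause:3.5\"\ncgroup_manager = \"systemd\"\n"
	fmt.Print(rewriteCrioConf(sample))
}
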
	I0729 18:00:41.351200  111957 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 18:00:41.360961  111957 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 18:00:41.370832  111957 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:00:41.512835  111957 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 18:00:41.824008  111957 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 18:00:41.824080  111957 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 18:00:41.829409  111957 start.go:563] Will wait 60s for crictl version
	I0729 18:00:41.829475  111957 ssh_runner.go:195] Run: which crictl
	I0729 18:00:41.833321  111957 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 18:00:41.873470  111957 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 18:00:41.873553  111957 ssh_runner.go:195] Run: crio --version
	I0729 18:00:41.917901  111957 ssh_runner.go:195] Run: crio --version
	I0729 18:00:41.946963  111957 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 18:00:41.948061  111957 main.go:141] libmachine: (ha-794405) Calling .GetIP
	I0729 18:00:41.950817  111957 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 18:00:41.951203  111957 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 18:00:41.951225  111957 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 18:00:41.951437  111957 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 18:00:41.955836  111957 kubeadm.go:883] updating cluster {Name:ha-794405 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-794405 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.62 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.185 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.179 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 18:00:41.955970  111957 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 18:00:41.956035  111957 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:00:42.000669  111957 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 18:00:42.000691  111957 crio.go:433] Images already preloaded, skipping extraction
	I0729 18:00:42.000752  111957 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:00:42.034072  111957 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 18:00:42.034103  111957 cache_images.go:84] Images are preloaded, skipping loading
	I0729 18:00:42.034122  111957 kubeadm.go:934] updating node { 192.168.39.102 8443 v1.30.3 crio true true} ...
	I0729 18:00:42.034255  111957 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-794405 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.102
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-794405 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 18:00:42.034330  111957 ssh_runner.go:195] Run: crio config
	I0729 18:00:42.085918  111957 cni.go:84] Creating CNI manager for ""
	I0729 18:00:42.085939  111957 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0729 18:00:42.085952  111957 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 18:00:42.085974  111957 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.102 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-794405 NodeName:ha-794405 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.102"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.102 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 18:00:42.086138  111957 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.102
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-794405"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.102
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.102"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
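
The kubelet drop-in at the top of this dump hard-codes the node name (ha-794405), node IP (192.168.39.102), and the binary path for v1.30.3, and is later copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A minimal, hypothetical Go sketch (not minikube's implementation) of rendering such a drop-in with text/template:

package main

import (
	"os"
	"text/template"
)

// kubeletDropIn is a simplified version of the [Service] drop-in shown in
// the log; the template text and field names are illustrative only.
const kubeletDropIn = `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(kubeletDropIn))
	// Values taken from the log above: ha-794405 / 192.168.39.102 / v1.30.3.
	_ = tmpl.Execute(os.Stdout, struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.30.3", "ha-794405", "192.168.39.102"})
}
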
	I0729 18:00:42.086166  111957 kube-vip.go:115] generating kube-vip config ...
	I0729 18:00:42.086204  111957 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 18:00:42.098786  111957 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 18:00:42.098923  111957 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
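
The kube-vip static pod above carries the HA virtual IP (192.168.39.254) as the "address" environment variable of its only container. A small, hypothetical Go sketch that reads that value back out of a generated manifest with gopkg.in/yaml.v3, for example as a sanity check; the pod struct models only the fields needed:

package main

import (
	"fmt"
	"log"

	"gopkg.in/yaml.v3"
)

// pod is just enough of the Pod schema to reach the env list of a container.
type pod struct {
	Spec struct {
		Containers []struct {
			Env []struct {
				Name  string `yaml:"name"`
				Value string `yaml:"value"`
			} `yaml:"env"`
		} `yaml:"containers"`
	} `yaml:"spec"`
}

// vipAddress returns the value of the "address" env var, the VIP that
// kube-vip advertises (192.168.39.254 in the manifest above).
func vipAddress(manifest []byte) (string, error) {
	var p pod
	if err := yaml.Unmarshal(manifest, &p); err != nil {
		return "", err
	}
	for _, c := range p.Spec.Containers {
		for _, e := range c.Env {
			if e.Name == "address" {
				return e.Value, nil
			}
		}
	}
	return "", fmt.Errorf("address env var not found")
}

func main() {
	manifest := []byte("spec:\n  containers:\n  - env:\n    - name: address\n      value: 192.168.39.254\n")
	addr, err := vipAddress(manifest)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(addr)
}
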
	I0729 18:00:42.098982  111957 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 18:00:42.108676  111957 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 18:00:42.108739  111957 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0729 18:00:42.118580  111957 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0729 18:00:42.134869  111957 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 18:00:42.150948  111957 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0729 18:00:42.169421  111957 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0729 18:00:42.186352  111957 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 18:00:42.191342  111957 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:00:42.332316  111957 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:00:42.347734  111957 certs.go:68] Setting up /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405 for IP: 192.168.39.102
	I0729 18:00:42.347770  111957 certs.go:194] generating shared ca certs ...
	I0729 18:00:42.347792  111957 certs.go:226] acquiring lock for ca certs: {Name:mk20106d58b3f22ea0fc0f4b499fa3fa572a2690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:00:42.347969  111957 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key
	I0729 18:00:42.348060  111957 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key
	I0729 18:00:42.348081  111957 certs.go:256] generating profile certs ...
	I0729 18:00:42.348200  111957 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/client.key
	I0729 18:00:42.348234  111957 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.key.730bf625
	I0729 18:00:42.348255  111957 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.crt.730bf625 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.102 192.168.39.62 192.168.39.185 192.168.39.254]
	I0729 18:00:42.546043  111957 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.crt.730bf625 ...
	I0729 18:00:42.546073  111957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.crt.730bf625: {Name:mk5301d530a01d92ef5bab28ae80c6673c6ba236 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:00:42.546247  111957 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.key.730bf625 ...
	I0729 18:00:42.546259  111957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.key.730bf625: {Name:mk898eda88f9fdc9bcded3f5997d6f47978cfb97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:00:42.546328  111957 certs.go:381] copying /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.crt.730bf625 -> /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.crt
	I0729 18:00:42.546478  111957 certs.go:385] copying /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.key.730bf625 -> /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.key
	I0729 18:00:42.546612  111957 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/proxy-client.key
	I0729 18:00:42.546629  111957 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 18:00:42.546643  111957 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 18:00:42.546655  111957 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 18:00:42.546666  111957 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 18:00:42.546679  111957 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 18:00:42.546691  111957 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 18:00:42.546703  111957 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 18:00:42.546714  111957 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 18:00:42.546771  111957 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282.pem (1338 bytes)
	W0729 18:00:42.546799  111957 certs.go:480] ignoring /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282_empty.pem, impossibly tiny 0 bytes
	I0729 18:00:42.546809  111957 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 18:00:42.546836  111957 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem (1078 bytes)
	I0729 18:00:42.546860  111957 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem (1123 bytes)
	I0729 18:00:42.546880  111957 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem (1679 bytes)
	I0729 18:00:42.546921  111957 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem (1708 bytes)
	I0729 18:00:42.546951  111957 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282.pem -> /usr/share/ca-certificates/95282.pem
	I0729 18:00:42.546965  111957 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem -> /usr/share/ca-certificates/952822.pem
	I0729 18:00:42.546977  111957 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:00:42.547539  111957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 18:00:42.572631  111957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 18:00:42.596920  111957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 18:00:42.621420  111957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 18:00:42.646491  111957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0729 18:00:42.670020  111957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 18:00:42.694118  111957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 18:00:42.719018  111957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 18:00:42.742812  111957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282.pem --> /usr/share/ca-certificates/95282.pem (1338 bytes)
	I0729 18:00:42.766517  111957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem --> /usr/share/ca-certificates/952822.pem (1708 bytes)
	I0729 18:00:42.789686  111957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 18:00:42.813798  111957 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 18:00:42.830300  111957 ssh_runner.go:195] Run: openssl version
	I0729 18:00:42.836403  111957 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/952822.pem && ln -fs /usr/share/ca-certificates/952822.pem /etc/ssl/certs/952822.pem"
	I0729 18:00:42.847285  111957 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/952822.pem
	I0729 18:00:42.851729  111957 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:45 /usr/share/ca-certificates/952822.pem
	I0729 18:00:42.851772  111957 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/952822.pem
	I0729 18:00:42.857305  111957 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/952822.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 18:00:42.866656  111957 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 18:00:42.877340  111957 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:00:42.881654  111957 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:00:42.881717  111957 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:00:42.888664  111957 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 18:00:42.899251  111957 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/95282.pem && ln -fs /usr/share/ca-certificates/95282.pem /etc/ssl/certs/95282.pem"
	I0729 18:00:42.910295  111957 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/95282.pem
	I0729 18:00:42.915048  111957 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:45 /usr/share/ca-certificates/95282.pem
	I0729 18:00:42.915096  111957 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/95282.pem
	I0729 18:00:42.920927  111957 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/95282.pem /etc/ssl/certs/51391683.0"
	I0729 18:00:42.930524  111957 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 18:00:42.934909  111957 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 18:00:42.940428  111957 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 18:00:42.946762  111957 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 18:00:42.952036  111957 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 18:00:42.957559  111957 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 18:00:42.962949  111957 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
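
Each openssl x509 -noout -in <cert> -checkend 86400 run above asks whether a control-plane certificate expires within the next 24 hours (openssl exits non-zero if it does). A hypothetical Go equivalent using crypto/x509; the function name expiresWithin and the hard-coded path are illustrative only:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// within d, mirroring `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Path taken from the log above; 86400s = 24h, as in the -checkend flag.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}

Note that openssl's exit-code convention (non-zero when the certificate will expire) is expressed here as a plain boolean instead.
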
	I0729 18:00:42.968378  111957 kubeadm.go:392] StartCluster: {Name:ha-794405 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-794405 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.62 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.185 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.179 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:00:42.968516  111957 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 18:00:42.968565  111957 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:00:43.004684  111957 cri.go:89] found id: "55137b5d553c696c4ffdc76b20bdcde1fb2f35602b34e8d264c1438a368c4f42"
	I0729 18:00:43.004708  111957 cri.go:89] found id: "51fad57579f21b5a011457c5c739093243b5f3b431b98db8b3b8f92ac916c53d"
	I0729 18:00:43.004714  111957 cri.go:89] found id: "bb65af2e22c6c1281cad453043b942b0fe5f6f716984cf6ab1a92f89ab851ea9"
	I0729 18:00:43.004718  111957 cri.go:89] found id: "b36f95de1e765db7360f4c567999293aceaf13ae2301b194f86db86199e2fd58"
	I0729 18:00:43.004722  111957 cri.go:89] found id: "34646ba311f51478411d1b66560efa7481f439c4be5e7764de99aa5dc1d517ca"
	I0729 18:00:43.004727  111957 cri.go:89] found id: "11e098645d7d850c038085ef2398844bd0fc149ede4fc04827afc44a6ff0c20d"
	I0729 18:00:43.004731  111957 cri.go:89] found id: "5005f4869048ef0f06e4eb1b5d7da9e4d2f016398c43e253b17b0445643472b5"
	I0729 18:00:43.004734  111957 cri.go:89] found id: "2992a8242c5e7814d064049bad66f003aaa89848eaa74422ed7c90776fdf849f"
	I0729 18:00:43.004737  111957 cri.go:89] found id: "83c7e5300596ed794752e31ceb8cb03e339b3ea52305f02c83e577378000130f"
	I0729 18:00:43.004745  111957 cri.go:89] found id: "152a9fa24ee44b5e9f21db72728d07a6432ed62f3a5c2c05ca7c1cd6de36609a"
	I0729 18:00:43.004748  111957 cri.go:89] found id: "985c673864e1a2dafce96d68ada1dba868c8e47de790d46e403469f1abd8bd8e"
	I0729 18:00:43.004750  111957 cri.go:89] found id: "fca3429715988ae09489d3d0f512d28babc61b2e2b8a3324612fba1c47839f25"
	I0729 18:00:43.004753  111957 cri.go:89] found id: "e224997d35927a2245c0946f16313307ad55fb6c004535e745af98f59921405e"
	I0729 18:00:43.004757  111957 cri.go:89] found id: ""
	I0729 18:00:43.004810  111957 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jul 29 18:03:19 ha-794405 crio[3918]: time="2024-07-29 18:03:19.709254882Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722276199709230584,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c3c566ef-e24f-422d-9139-fb20c0c7b1f3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:03:19 ha-794405 crio[3918]: time="2024-07-29 18:03:19.709881844Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=78341e16-b273-4f34-b9a9-cd5204e12e11 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:03:19 ha-794405 crio[3918]: time="2024-07-29 18:03:19.710000105Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=78341e16-b273-4f34-b9a9-cd5204e12e11 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:03:19 ha-794405 crio[3918]: time="2024-07-29 18:03:19.710556727Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:02bdfc68d9f617cb2aad358d4bd0e4768765ad7a78e69de442e0a2a7b9e481e7,PodSandboxId:fe6509553c8c1f8ede5c703c4e7e0008dd106f146c9bc7cd735db49b173cda4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722276106500049710,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e08d093-f8b5-4614-9be2-5832f7cafa75,},Annotations:map[string]string{io.kubernetes.container.hash: c0dcef9d,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b09eb16bdfe9b6222221af56f9412362f12c483a4f77154ecc705b5c7446d76,PodSandboxId:78ae745caafbbdc66552c4636aa0eab113341924b6111c26f210dd0991f3a6c9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722276090504934171,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e530a81d1e9a9e39055d59309a089fd,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45eb8375f5352d20d949845ff3b0b2c7daa0aae371096814bf64f44c7d52ed79,PodSandboxId:665b3ece6fea2d40b871d04c74a1ed318e7192c4c50e154e0347eefd0c721dbd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722276082502335996,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5110118fe5cf51b6a61d9f9785be3c3c,},Annotations:map[string]string{io.kubernetes.container.hash: caa197,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df50cbfea05aafd3a106c6ea17ae0d08034a824cbe3f7ca66fc3e1dd432c3285,PodSandboxId:27519eeed1ad345199442cc5514bfaec3420dc11a5be9e87d038b1aa9043e3b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722276078775418847,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9t4xg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ceb96a8b-de79-4d8b-a767-8e61b163b088,},Annotations:map[string]string{io.kubernetes.container.hash: 341563b5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07b1d684046b21ac59ccf46553ab97cb19d10497274411a8820a605f8f912558,PodSandboxId:bd474b6e52dd8f4258494c02353a9706bf4a115ac08edc9fda4665cfe0d21d16,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722276057314132648,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27e018d547ebb2f3d9e79e0b37116ab4,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy
: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f8c70a5ed56917c765a69bdfe93f88f8dcab61db45495c6865aec4a2e5fa2d0,PodSandboxId:6adda675354bdd09ae1a31cdcaf4432f224e4346f92899e8f69f8330d4119235,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722276045685227933,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-llkz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95536eff-3f12-4a7e-9504-c8f6b1acc4cb,},Annotations:map[string]string{io.kubernetes.container.hash: 6b14763f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:5f9e8665b504be0996b984457804170679d11fc9dcecd8236afb3f70519b4827,PodSandboxId:fe6509553c8c1f8ede5c703c4e7e0008dd106f146c9bc7cd735db49b173cda4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722276045706163534,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e08d093-f8b5-4614-9be2-5832f7cafa75,},Annotations:map[string]string{io.kubernetes.container.hash: c0dcef9d,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePerio
d: 30,},},&Container{Id:a58f0d56b3f0a58524ef65306b9ede0fb87c7c9b3fbd2288bffc08a5348a9ae2,PodSandboxId:594dffd8c666427c9f0b82e80bdf51ba17082f244fb2d95a528ab0e7db628de7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722276045653993704,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nzvff,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1e2c116-2549-4e1a-8d79-cd86595db9f3,},Annotations:map[string]string{io.kubernetes.container.hash: ea8e4842,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dbab85d8958b230048a5f45b40bce6e9b85e71b81ec97b2ea07de656b74bc98,PodSandboxId:364906e40ee243740f44e0b1115eba9f7096cbbecde537d539257461a1a2aaf0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722276045539109692,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j4l89,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0b81d74-531b-4878-84ea-654e7b57f0ba,},Annotations:map[string]string{io.kubernetes.container.hash: a8506c0f,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426ab87d58d7f19613e1653e84994b65699a76b375341068d0de90ba2bd56b1a,PodSandboxId:755e87bbe77b8a6dfcf12c237786a46b62272bbb1859cbe73b505d887f36177d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722276045638685182,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bb2jg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee9ad335-25b2-4e6c-a523-47b06ce713dc,},Annotations:map[string]string{io.kubernetes.container.hash: 29990db6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8da050dd3d84fd7b5418eca925ae183e7d0f42e560d64a9e2cd6ff830ca57e2b,PodSandboxId:0952300a118864972c95ae0c0c7728d1d8363fc27b640cd2dbe8d9f66d92af0f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722276045458799949,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3
d262369b7075ef1593bfc8c891dbcd,},Annotations:map[string]string{io.kubernetes.container.hash: 7df42e99,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fc14f09da5ac482d6b2832f18bc5db12bd83b21f6e619f8e9df03c691c6ce88,PodSandboxId:5f94d6def70c976cd013e8895da23850086d4cd552edcaf2b6131cb4d4305cc8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722276045423949312,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c874b85b6752de4391e8b92
749861ca9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad32ae050fd042a7e8521c9b70e73bc33fb0877060422ce42bf04b9e2d2810cc,PodSandboxId:78ae745caafbbdc66552c4636aa0eab113341924b6111c26f210dd0991f3a6c9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722276045331933115,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e530a81d1e9a9e390
55d59309a089fd,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b81d1356b5384c04eb260c496317eb5d514e3314aaa5c6e50a23cf4802945475,PodSandboxId:665b3ece6fea2d40b871d04c74a1ed318e7192c4c50e154e0347eefd0c721dbd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722276045213219904,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5110118fe5cf51b6a61d9f9785be3c3c,},Annot
ations:map[string]string{io.kubernetes.container.hash: caa197,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:882dc7ddd36ca559e59fbe1b16606ac3c40a7268770e2f675775826c1aa17280,PodSandboxId:030fd183fc5d70136c09a8b7e0e2865f44dd2a213638647e35bdbda279b830ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722275551525114402,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9t4xg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ceb96a8b-de79-4d8b-a767-8e61b163b088,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: 341563b5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34646ba311f51478411d1b66560efa7481f439c4be5e7764de99aa5dc1d517ca,PodSandboxId:0a85f31b7216e5e441bcfe38479d4617dde44adc1b035dce9e95d895dee48f2f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722275416642746499,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nzvff,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1e2c116-2549-4e1a-8d79-cd86595db9f3,},Annotations:map[string]string{io.kubernet
es.container.hash: ea8e4842,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11e098645d7d850c038085ef2398844bd0fc149ede4fc04827afc44a6ff0c20d,PodSandboxId:c21b66fe5a20a39a06ae0c7c25ee683e7eba20a1e456cb12273df31e8c1144bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722275416596729390,Labels:map[string]string{io.kubernetes.container.name: coredns
,io.kubernetes.pod.name: coredns-7db6d8ff4d-bb2jg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee9ad335-25b2-4e6c-a523-47b06ce713dc,},Annotations:map[string]string{io.kubernetes.container.hash: 29990db6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5005f4869048ef0f06e4eb1b5d7da9e4d2f016398c43e253b17b0445643472b5,PodSandboxId:a04c14b520cac4fb30d6126c231007d3c29694f71329096e36e4531123b8d5f7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722275404641609024,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j4l89,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0b81d74-531b-4878-84ea-654e7b57f0ba,},Annotations:map[string]string{io.kubernetes.container.hash: a8506c0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2992a8242c5e7814d064049bad66f003aaa89848eaa74422ed7c90776fdf849f,PodSandboxId:afea598394fc694045c2dd49ea65df7a7559d4cad31d50a2ddcf34b62d3e506f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHand
ler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722275401434022832,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-llkz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95536eff-3f12-4a7e-9504-c8f6b1acc4cb,},Annotations:map[string]string{io.kubernetes.container.hash: 6b14763f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fca3429715988ae09489d3d0f512d28babc61b2e2b8a3324612fba1c47839f25,PodSandboxId:da888d4d893d610e002957f561d1f310068a089f638fa6e2f658d571ef154999,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722
eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722275381004990011,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c874b85b6752de4391e8b92749861ca9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e224997d35927a2245c0946f16313307ad55fb6c004535e745af98f59921405e,PodSandboxId:a93bf9947672ad13675c5da4da58a497db7376bebf175bf0121b9f445e340e54,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd
477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722275380962302482,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3d262369b7075ef1593bfc8c891dbcd,},Annotations:map[string]string{io.kubernetes.container.hash: 7df42e99,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=78341e16-b273-4f34-b9a9-cd5204e12e11 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:03:19 ha-794405 crio[3918]: time="2024-07-29 18:03:19.753274068Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a882a99d-4697-4461-ad60-85adcaa6ff71 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:03:19 ha-794405 crio[3918]: time="2024-07-29 18:03:19.753549923Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a882a99d-4697-4461-ad60-85adcaa6ff71 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:03:19 ha-794405 crio[3918]: time="2024-07-29 18:03:19.757611289Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=99fd2180-9294-426b-a9ae-f932233a57c2 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:03:19 ha-794405 crio[3918]: time="2024-07-29 18:03:19.758153612Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722276199758127408,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=99fd2180-9294-426b-a9ae-f932233a57c2 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:03:19 ha-794405 crio[3918]: time="2024-07-29 18:03:19.758760703Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fc735101-96ee-42ab-b401-2fbee71b843a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:03:19 ha-794405 crio[3918]: time="2024-07-29 18:03:19.758815958Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fc735101-96ee-42ab-b401-2fbee71b843a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:03:19 ha-794405 crio[3918]: time="2024-07-29 18:03:19.759276009Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:02bdfc68d9f617cb2aad358d4bd0e4768765ad7a78e69de442e0a2a7b9e481e7,PodSandboxId:fe6509553c8c1f8ede5c703c4e7e0008dd106f146c9bc7cd735db49b173cda4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722276106500049710,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e08d093-f8b5-4614-9be2-5832f7cafa75,},Annotations:map[string]string{io.kubernetes.container.hash: c0dcef9d,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b09eb16bdfe9b6222221af56f9412362f12c483a4f77154ecc705b5c7446d76,PodSandboxId:78ae745caafbbdc66552c4636aa0eab113341924b6111c26f210dd0991f3a6c9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722276090504934171,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e530a81d1e9a9e39055d59309a089fd,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45eb8375f5352d20d949845ff3b0b2c7daa0aae371096814bf64f44c7d52ed79,PodSandboxId:665b3ece6fea2d40b871d04c74a1ed318e7192c4c50e154e0347eefd0c721dbd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722276082502335996,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5110118fe5cf51b6a61d9f9785be3c3c,},Annotations:map[string]string{io.kubernetes.container.hash: caa197,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df50cbfea05aafd3a106c6ea17ae0d08034a824cbe3f7ca66fc3e1dd432c3285,PodSandboxId:27519eeed1ad345199442cc5514bfaec3420dc11a5be9e87d038b1aa9043e3b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722276078775418847,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9t4xg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ceb96a8b-de79-4d8b-a767-8e61b163b088,},Annotations:map[string]string{io.kubernetes.container.hash: 341563b5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07b1d684046b21ac59ccf46553ab97cb19d10497274411a8820a605f8f912558,PodSandboxId:bd474b6e52dd8f4258494c02353a9706bf4a115ac08edc9fda4665cfe0d21d16,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722276057314132648,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27e018d547ebb2f3d9e79e0b37116ab4,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy
: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f8c70a5ed56917c765a69bdfe93f88f8dcab61db45495c6865aec4a2e5fa2d0,PodSandboxId:6adda675354bdd09ae1a31cdcaf4432f224e4346f92899e8f69f8330d4119235,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722276045685227933,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-llkz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95536eff-3f12-4a7e-9504-c8f6b1acc4cb,},Annotations:map[string]string{io.kubernetes.container.hash: 6b14763f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:5f9e8665b504be0996b984457804170679d11fc9dcecd8236afb3f70519b4827,PodSandboxId:fe6509553c8c1f8ede5c703c4e7e0008dd106f146c9bc7cd735db49b173cda4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722276045706163534,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e08d093-f8b5-4614-9be2-5832f7cafa75,},Annotations:map[string]string{io.kubernetes.container.hash: c0dcef9d,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePerio
d: 30,},},&Container{Id:a58f0d56b3f0a58524ef65306b9ede0fb87c7c9b3fbd2288bffc08a5348a9ae2,PodSandboxId:594dffd8c666427c9f0b82e80bdf51ba17082f244fb2d95a528ab0e7db628de7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722276045653993704,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nzvff,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1e2c116-2549-4e1a-8d79-cd86595db9f3,},Annotations:map[string]string{io.kubernetes.container.hash: ea8e4842,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dbab85d8958b230048a5f45b40bce6e9b85e71b81ec97b2ea07de656b74bc98,PodSandboxId:364906e40ee243740f44e0b1115eba9f7096cbbecde537d539257461a1a2aaf0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722276045539109692,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j4l89,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0b81d74-531b-4878-84ea-654e7b57f0ba,},Annotations:map[string]string{io.kubernetes.container.hash: a8506c0f,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426ab87d58d7f19613e1653e84994b65699a76b375341068d0de90ba2bd56b1a,PodSandboxId:755e87bbe77b8a6dfcf12c237786a46b62272bbb1859cbe73b505d887f36177d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722276045638685182,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bb2jg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee9ad335-25b2-4e6c-a523-47b06ce713dc,},Annotations:map[string]string{io.kubernetes.container.hash: 29990db6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8da050dd3d84fd7b5418eca925ae183e7d0f42e560d64a9e2cd6ff830ca57e2b,PodSandboxId:0952300a118864972c95ae0c0c7728d1d8363fc27b640cd2dbe8d9f66d92af0f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722276045458799949,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3
d262369b7075ef1593bfc8c891dbcd,},Annotations:map[string]string{io.kubernetes.container.hash: 7df42e99,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fc14f09da5ac482d6b2832f18bc5db12bd83b21f6e619f8e9df03c691c6ce88,PodSandboxId:5f94d6def70c976cd013e8895da23850086d4cd552edcaf2b6131cb4d4305cc8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722276045423949312,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c874b85b6752de4391e8b92
749861ca9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad32ae050fd042a7e8521c9b70e73bc33fb0877060422ce42bf04b9e2d2810cc,PodSandboxId:78ae745caafbbdc66552c4636aa0eab113341924b6111c26f210dd0991f3a6c9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722276045331933115,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e530a81d1e9a9e390
55d59309a089fd,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b81d1356b5384c04eb260c496317eb5d514e3314aaa5c6e50a23cf4802945475,PodSandboxId:665b3ece6fea2d40b871d04c74a1ed318e7192c4c50e154e0347eefd0c721dbd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722276045213219904,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5110118fe5cf51b6a61d9f9785be3c3c,},Annot
ations:map[string]string{io.kubernetes.container.hash: caa197,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:882dc7ddd36ca559e59fbe1b16606ac3c40a7268770e2f675775826c1aa17280,PodSandboxId:030fd183fc5d70136c09a8b7e0e2865f44dd2a213638647e35bdbda279b830ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722275551525114402,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9t4xg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ceb96a8b-de79-4d8b-a767-8e61b163b088,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: 341563b5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34646ba311f51478411d1b66560efa7481f439c4be5e7764de99aa5dc1d517ca,PodSandboxId:0a85f31b7216e5e441bcfe38479d4617dde44adc1b035dce9e95d895dee48f2f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722275416642746499,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nzvff,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1e2c116-2549-4e1a-8d79-cd86595db9f3,},Annotations:map[string]string{io.kubernet
es.container.hash: ea8e4842,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11e098645d7d850c038085ef2398844bd0fc149ede4fc04827afc44a6ff0c20d,PodSandboxId:c21b66fe5a20a39a06ae0c7c25ee683e7eba20a1e456cb12273df31e8c1144bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722275416596729390,Labels:map[string]string{io.kubernetes.container.name: coredns
,io.kubernetes.pod.name: coredns-7db6d8ff4d-bb2jg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee9ad335-25b2-4e6c-a523-47b06ce713dc,},Annotations:map[string]string{io.kubernetes.container.hash: 29990db6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5005f4869048ef0f06e4eb1b5d7da9e4d2f016398c43e253b17b0445643472b5,PodSandboxId:a04c14b520cac4fb30d6126c231007d3c29694f71329096e36e4531123b8d5f7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722275404641609024,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j4l89,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0b81d74-531b-4878-84ea-654e7b57f0ba,},Annotations:map[string]string{io.kubernetes.container.hash: a8506c0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2992a8242c5e7814d064049bad66f003aaa89848eaa74422ed7c90776fdf849f,PodSandboxId:afea598394fc694045c2dd49ea65df7a7559d4cad31d50a2ddcf34b62d3e506f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHand
ler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722275401434022832,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-llkz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95536eff-3f12-4a7e-9504-c8f6b1acc4cb,},Annotations:map[string]string{io.kubernetes.container.hash: 6b14763f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fca3429715988ae09489d3d0f512d28babc61b2e2b8a3324612fba1c47839f25,PodSandboxId:da888d4d893d610e002957f561d1f310068a089f638fa6e2f658d571ef154999,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722
eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722275381004990011,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c874b85b6752de4391e8b92749861ca9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e224997d35927a2245c0946f16313307ad55fb6c004535e745af98f59921405e,PodSandboxId:a93bf9947672ad13675c5da4da58a497db7376bebf175bf0121b9f445e340e54,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd
477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722275380962302482,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3d262369b7075ef1593bfc8c891dbcd,},Annotations:map[string]string{io.kubernetes.container.hash: 7df42e99,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fc735101-96ee-42ab-b401-2fbee71b843a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:03:19 ha-794405 crio[3918]: time="2024-07-29 18:03:19.803479762Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f4cdcb51-a214-49d4-9424-828d5b3f1094 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:03:19 ha-794405 crio[3918]: time="2024-07-29 18:03:19.803552016Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f4cdcb51-a214-49d4-9424-828d5b3f1094 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:03:19 ha-794405 crio[3918]: time="2024-07-29 18:03:19.804723856Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9b178cdf-8223-4f6c-8f14-9e42a475e4cb name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:03:19 ha-794405 crio[3918]: time="2024-07-29 18:03:19.805247705Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722276199805221878,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9b178cdf-8223-4f6c-8f14-9e42a475e4cb name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:03:19 ha-794405 crio[3918]: time="2024-07-29 18:03:19.805792856Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8e4bfdd6-415c-41d5-b92e-22619b575741 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:03:19 ha-794405 crio[3918]: time="2024-07-29 18:03:19.805865957Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8e4bfdd6-415c-41d5-b92e-22619b575741 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:03:19 ha-794405 crio[3918]: time="2024-07-29 18:03:19.806321003Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:02bdfc68d9f617cb2aad358d4bd0e4768765ad7a78e69de442e0a2a7b9e481e7,PodSandboxId:fe6509553c8c1f8ede5c703c4e7e0008dd106f146c9bc7cd735db49b173cda4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722276106500049710,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e08d093-f8b5-4614-9be2-5832f7cafa75,},Annotations:map[string]string{io.kubernetes.container.hash: c0dcef9d,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b09eb16bdfe9b6222221af56f9412362f12c483a4f77154ecc705b5c7446d76,PodSandboxId:78ae745caafbbdc66552c4636aa0eab113341924b6111c26f210dd0991f3a6c9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722276090504934171,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e530a81d1e9a9e39055d59309a089fd,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45eb8375f5352d20d949845ff3b0b2c7daa0aae371096814bf64f44c7d52ed79,PodSandboxId:665b3ece6fea2d40b871d04c74a1ed318e7192c4c50e154e0347eefd0c721dbd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722276082502335996,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5110118fe5cf51b6a61d9f9785be3c3c,},Annotations:map[string]string{io.kubernetes.container.hash: caa197,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df50cbfea05aafd3a106c6ea17ae0d08034a824cbe3f7ca66fc3e1dd432c3285,PodSandboxId:27519eeed1ad345199442cc5514bfaec3420dc11a5be9e87d038b1aa9043e3b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722276078775418847,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9t4xg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ceb96a8b-de79-4d8b-a767-8e61b163b088,},Annotations:map[string]string{io.kubernetes.container.hash: 341563b5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07b1d684046b21ac59ccf46553ab97cb19d10497274411a8820a605f8f912558,PodSandboxId:bd474b6e52dd8f4258494c02353a9706bf4a115ac08edc9fda4665cfe0d21d16,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722276057314132648,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27e018d547ebb2f3d9e79e0b37116ab4,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy
: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f8c70a5ed56917c765a69bdfe93f88f8dcab61db45495c6865aec4a2e5fa2d0,PodSandboxId:6adda675354bdd09ae1a31cdcaf4432f224e4346f92899e8f69f8330d4119235,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722276045685227933,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-llkz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95536eff-3f12-4a7e-9504-c8f6b1acc4cb,},Annotations:map[string]string{io.kubernetes.container.hash: 6b14763f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:5f9e8665b504be0996b984457804170679d11fc9dcecd8236afb3f70519b4827,PodSandboxId:fe6509553c8c1f8ede5c703c4e7e0008dd106f146c9bc7cd735db49b173cda4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722276045706163534,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e08d093-f8b5-4614-9be2-5832f7cafa75,},Annotations:map[string]string{io.kubernetes.container.hash: c0dcef9d,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePerio
d: 30,},},&Container{Id:a58f0d56b3f0a58524ef65306b9ede0fb87c7c9b3fbd2288bffc08a5348a9ae2,PodSandboxId:594dffd8c666427c9f0b82e80bdf51ba17082f244fb2d95a528ab0e7db628de7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722276045653993704,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nzvff,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1e2c116-2549-4e1a-8d79-cd86595db9f3,},Annotations:map[string]string{io.kubernetes.container.hash: ea8e4842,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dbab85d8958b230048a5f45b40bce6e9b85e71b81ec97b2ea07de656b74bc98,PodSandboxId:364906e40ee243740f44e0b1115eba9f7096cbbecde537d539257461a1a2aaf0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722276045539109692,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j4l89,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0b81d74-531b-4878-84ea-654e7b57f0ba,},Annotations:map[string]string{io.kubernetes.container.hash: a8506c0f,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426ab87d58d7f19613e1653e84994b65699a76b375341068d0de90ba2bd56b1a,PodSandboxId:755e87bbe77b8a6dfcf12c237786a46b62272bbb1859cbe73b505d887f36177d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722276045638685182,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bb2jg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee9ad335-25b2-4e6c-a523-47b06ce713dc,},Annotations:map[string]string{io.kubernetes.container.hash: 29990db6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8da050dd3d84fd7b5418eca925ae183e7d0f42e560d64a9e2cd6ff830ca57e2b,PodSandboxId:0952300a118864972c95ae0c0c7728d1d8363fc27b640cd2dbe8d9f66d92af0f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722276045458799949,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3
d262369b7075ef1593bfc8c891dbcd,},Annotations:map[string]string{io.kubernetes.container.hash: 7df42e99,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fc14f09da5ac482d6b2832f18bc5db12bd83b21f6e619f8e9df03c691c6ce88,PodSandboxId:5f94d6def70c976cd013e8895da23850086d4cd552edcaf2b6131cb4d4305cc8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722276045423949312,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c874b85b6752de4391e8b92
749861ca9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad32ae050fd042a7e8521c9b70e73bc33fb0877060422ce42bf04b9e2d2810cc,PodSandboxId:78ae745caafbbdc66552c4636aa0eab113341924b6111c26f210dd0991f3a6c9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722276045331933115,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e530a81d1e9a9e390
55d59309a089fd,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b81d1356b5384c04eb260c496317eb5d514e3314aaa5c6e50a23cf4802945475,PodSandboxId:665b3ece6fea2d40b871d04c74a1ed318e7192c4c50e154e0347eefd0c721dbd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722276045213219904,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5110118fe5cf51b6a61d9f9785be3c3c,},Annot
ations:map[string]string{io.kubernetes.container.hash: caa197,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:882dc7ddd36ca559e59fbe1b16606ac3c40a7268770e2f675775826c1aa17280,PodSandboxId:030fd183fc5d70136c09a8b7e0e2865f44dd2a213638647e35bdbda279b830ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722275551525114402,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9t4xg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ceb96a8b-de79-4d8b-a767-8e61b163b088,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: 341563b5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34646ba311f51478411d1b66560efa7481f439c4be5e7764de99aa5dc1d517ca,PodSandboxId:0a85f31b7216e5e441bcfe38479d4617dde44adc1b035dce9e95d895dee48f2f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722275416642746499,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nzvff,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1e2c116-2549-4e1a-8d79-cd86595db9f3,},Annotations:map[string]string{io.kubernet
es.container.hash: ea8e4842,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11e098645d7d850c038085ef2398844bd0fc149ede4fc04827afc44a6ff0c20d,PodSandboxId:c21b66fe5a20a39a06ae0c7c25ee683e7eba20a1e456cb12273df31e8c1144bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722275416596729390,Labels:map[string]string{io.kubernetes.container.name: coredns
,io.kubernetes.pod.name: coredns-7db6d8ff4d-bb2jg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee9ad335-25b2-4e6c-a523-47b06ce713dc,},Annotations:map[string]string{io.kubernetes.container.hash: 29990db6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5005f4869048ef0f06e4eb1b5d7da9e4d2f016398c43e253b17b0445643472b5,PodSandboxId:a04c14b520cac4fb30d6126c231007d3c29694f71329096e36e4531123b8d5f7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722275404641609024,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j4l89,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0b81d74-531b-4878-84ea-654e7b57f0ba,},Annotations:map[string]string{io.kubernetes.container.hash: a8506c0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2992a8242c5e7814d064049bad66f003aaa89848eaa74422ed7c90776fdf849f,PodSandboxId:afea598394fc694045c2dd49ea65df7a7559d4cad31d50a2ddcf34b62d3e506f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHand
ler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722275401434022832,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-llkz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95536eff-3f12-4a7e-9504-c8f6b1acc4cb,},Annotations:map[string]string{io.kubernetes.container.hash: 6b14763f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fca3429715988ae09489d3d0f512d28babc61b2e2b8a3324612fba1c47839f25,PodSandboxId:da888d4d893d610e002957f561d1f310068a089f638fa6e2f658d571ef154999,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722
eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722275381004990011,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c874b85b6752de4391e8b92749861ca9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e224997d35927a2245c0946f16313307ad55fb6c004535e745af98f59921405e,PodSandboxId:a93bf9947672ad13675c5da4da58a497db7376bebf175bf0121b9f445e340e54,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd
477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722275380962302482,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3d262369b7075ef1593bfc8c891dbcd,},Annotations:map[string]string{io.kubernetes.container.hash: 7df42e99,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8e4bfdd6-415c-41d5-b92e-22619b575741 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:03:19 ha-794405 crio[3918]: time="2024-07-29 18:03:19.848818295Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=873fe223-95ba-42bd-9cc4-7a85b2e85732 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:03:19 ha-794405 crio[3918]: time="2024-07-29 18:03:19.848893303Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=873fe223-95ba-42bd-9cc4-7a85b2e85732 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:03:19 ha-794405 crio[3918]: time="2024-07-29 18:03:19.850050096Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=62102b3d-0be5-42d1-ae90-4f83067301de name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:03:19 ha-794405 crio[3918]: time="2024-07-29 18:03:19.850546445Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722276199850522289,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=62102b3d-0be5-42d1-ae90-4f83067301de name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:03:19 ha-794405 crio[3918]: time="2024-07-29 18:03:19.851197701Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5727b9a3-ec6c-4d32-b615-4ef35fff95bf name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:03:19 ha-794405 crio[3918]: time="2024-07-29 18:03:19.851263530Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5727b9a3-ec6c-4d32-b615-4ef35fff95bf name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:03:19 ha-794405 crio[3918]: time="2024-07-29 18:03:19.851936812Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:02bdfc68d9f617cb2aad358d4bd0e4768765ad7a78e69de442e0a2a7b9e481e7,PodSandboxId:fe6509553c8c1f8ede5c703c4e7e0008dd106f146c9bc7cd735db49b173cda4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722276106500049710,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e08d093-f8b5-4614-9be2-5832f7cafa75,},Annotations:map[string]string{io.kubernetes.container.hash: c0dcef9d,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b09eb16bdfe9b6222221af56f9412362f12c483a4f77154ecc705b5c7446d76,PodSandboxId:78ae745caafbbdc66552c4636aa0eab113341924b6111c26f210dd0991f3a6c9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722276090504934171,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e530a81d1e9a9e39055d59309a089fd,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45eb8375f5352d20d949845ff3b0b2c7daa0aae371096814bf64f44c7d52ed79,PodSandboxId:665b3ece6fea2d40b871d04c74a1ed318e7192c4c50e154e0347eefd0c721dbd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722276082502335996,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5110118fe5cf51b6a61d9f9785be3c3c,},Annotations:map[string]string{io.kubernetes.container.hash: caa197,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df50cbfea05aafd3a106c6ea17ae0d08034a824cbe3f7ca66fc3e1dd432c3285,PodSandboxId:27519eeed1ad345199442cc5514bfaec3420dc11a5be9e87d038b1aa9043e3b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722276078775418847,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9t4xg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ceb96a8b-de79-4d8b-a767-8e61b163b088,},Annotations:map[string]string{io.kubernetes.container.hash: 341563b5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07b1d684046b21ac59ccf46553ab97cb19d10497274411a8820a605f8f912558,PodSandboxId:bd474b6e52dd8f4258494c02353a9706bf4a115ac08edc9fda4665cfe0d21d16,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722276057314132648,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27e018d547ebb2f3d9e79e0b37116ab4,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy
: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f8c70a5ed56917c765a69bdfe93f88f8dcab61db45495c6865aec4a2e5fa2d0,PodSandboxId:6adda675354bdd09ae1a31cdcaf4432f224e4346f92899e8f69f8330d4119235,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722276045685227933,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-llkz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95536eff-3f12-4a7e-9504-c8f6b1acc4cb,},Annotations:map[string]string{io.kubernetes.container.hash: 6b14763f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:5f9e8665b504be0996b984457804170679d11fc9dcecd8236afb3f70519b4827,PodSandboxId:fe6509553c8c1f8ede5c703c4e7e0008dd106f146c9bc7cd735db49b173cda4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722276045706163534,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e08d093-f8b5-4614-9be2-5832f7cafa75,},Annotations:map[string]string{io.kubernetes.container.hash: c0dcef9d,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePerio
d: 30,},},&Container{Id:a58f0d56b3f0a58524ef65306b9ede0fb87c7c9b3fbd2288bffc08a5348a9ae2,PodSandboxId:594dffd8c666427c9f0b82e80bdf51ba17082f244fb2d95a528ab0e7db628de7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722276045653993704,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nzvff,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1e2c116-2549-4e1a-8d79-cd86595db9f3,},Annotations:map[string]string{io.kubernetes.container.hash: ea8e4842,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dbab85d8958b230048a5f45b40bce6e9b85e71b81ec97b2ea07de656b74bc98,PodSandboxId:364906e40ee243740f44e0b1115eba9f7096cbbecde537d539257461a1a2aaf0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722276045539109692,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j4l89,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0b81d74-531b-4878-84ea-654e7b57f0ba,},Annotations:map[string]string{io.kubernetes.container.hash: a8506c0f,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426ab87d58d7f19613e1653e84994b65699a76b375341068d0de90ba2bd56b1a,PodSandboxId:755e87bbe77b8a6dfcf12c237786a46b62272bbb1859cbe73b505d887f36177d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722276045638685182,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bb2jg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee9ad335-25b2-4e6c-a523-47b06ce713dc,},Annotations:map[string]string{io.kubernetes.container.hash: 29990db6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8da050dd3d84fd7b5418eca925ae183e7d0f42e560d64a9e2cd6ff830ca57e2b,PodSandboxId:0952300a118864972c95ae0c0c7728d1d8363fc27b640cd2dbe8d9f66d92af0f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722276045458799949,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3
d262369b7075ef1593bfc8c891dbcd,},Annotations:map[string]string{io.kubernetes.container.hash: 7df42e99,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fc14f09da5ac482d6b2832f18bc5db12bd83b21f6e619f8e9df03c691c6ce88,PodSandboxId:5f94d6def70c976cd013e8895da23850086d4cd552edcaf2b6131cb4d4305cc8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722276045423949312,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c874b85b6752de4391e8b92
749861ca9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad32ae050fd042a7e8521c9b70e73bc33fb0877060422ce42bf04b9e2d2810cc,PodSandboxId:78ae745caafbbdc66552c4636aa0eab113341924b6111c26f210dd0991f3a6c9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722276045331933115,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e530a81d1e9a9e390
55d59309a089fd,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b81d1356b5384c04eb260c496317eb5d514e3314aaa5c6e50a23cf4802945475,PodSandboxId:665b3ece6fea2d40b871d04c74a1ed318e7192c4c50e154e0347eefd0c721dbd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722276045213219904,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5110118fe5cf51b6a61d9f9785be3c3c,},Annot
ations:map[string]string{io.kubernetes.container.hash: caa197,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:882dc7ddd36ca559e59fbe1b16606ac3c40a7268770e2f675775826c1aa17280,PodSandboxId:030fd183fc5d70136c09a8b7e0e2865f44dd2a213638647e35bdbda279b830ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722275551525114402,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9t4xg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ceb96a8b-de79-4d8b-a767-8e61b163b088,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: 341563b5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34646ba311f51478411d1b66560efa7481f439c4be5e7764de99aa5dc1d517ca,PodSandboxId:0a85f31b7216e5e441bcfe38479d4617dde44adc1b035dce9e95d895dee48f2f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722275416642746499,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nzvff,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1e2c116-2549-4e1a-8d79-cd86595db9f3,},Annotations:map[string]string{io.kubernet
es.container.hash: ea8e4842,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11e098645d7d850c038085ef2398844bd0fc149ede4fc04827afc44a6ff0c20d,PodSandboxId:c21b66fe5a20a39a06ae0c7c25ee683e7eba20a1e456cb12273df31e8c1144bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722275416596729390,Labels:map[string]string{io.kubernetes.container.name: coredns
,io.kubernetes.pod.name: coredns-7db6d8ff4d-bb2jg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee9ad335-25b2-4e6c-a523-47b06ce713dc,},Annotations:map[string]string{io.kubernetes.container.hash: 29990db6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5005f4869048ef0f06e4eb1b5d7da9e4d2f016398c43e253b17b0445643472b5,PodSandboxId:a04c14b520cac4fb30d6126c231007d3c29694f71329096e36e4531123b8d5f7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722275404641609024,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j4l89,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0b81d74-531b-4878-84ea-654e7b57f0ba,},Annotations:map[string]string{io.kubernetes.container.hash: a8506c0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2992a8242c5e7814d064049bad66f003aaa89848eaa74422ed7c90776fdf849f,PodSandboxId:afea598394fc694045c2dd49ea65df7a7559d4cad31d50a2ddcf34b62d3e506f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHand
ler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722275401434022832,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-llkz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95536eff-3f12-4a7e-9504-c8f6b1acc4cb,},Annotations:map[string]string{io.kubernetes.container.hash: 6b14763f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fca3429715988ae09489d3d0f512d28babc61b2e2b8a3324612fba1c47839f25,PodSandboxId:da888d4d893d610e002957f561d1f310068a089f638fa6e2f658d571ef154999,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722
eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722275381004990011,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c874b85b6752de4391e8b92749861ca9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e224997d35927a2245c0946f16313307ad55fb6c004535e745af98f59921405e,PodSandboxId:a93bf9947672ad13675c5da4da58a497db7376bebf175bf0121b9f445e340e54,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd
477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722275380962302482,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3d262369b7075ef1593bfc8c891dbcd,},Annotations:map[string]string{io.kubernetes.container.hash: 7df42e99,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5727b9a3-ec6c-4d32-b615-4ef35fff95bf name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	02bdfc68d9f61       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       4                   fe6509553c8c1       storage-provisioner
	3b09eb16bdfe9       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      About a minute ago   Running             kube-controller-manager   2                   78ae745caafbb       kube-controller-manager-ha-794405
	45eb8375f5352       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      About a minute ago   Running             kube-apiserver            3                   665b3ece6fea2       kube-apiserver-ha-794405
	df50cbfea05aa       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      2 minutes ago        Running             busybox                   1                   27519eeed1ad3       busybox-fc5497c4f-9t4xg
	07b1d684046b2       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   bd474b6e52dd8       kube-vip-ha-794405
	5f9e8665b504b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       3                   fe6509553c8c1       storage-provisioner
	3f8c70a5ed569       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      2 minutes ago        Running             kube-proxy                1                   6adda675354bd       kube-proxy-llkz8
	a58f0d56b3f0a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   594dffd8c6664       coredns-7db6d8ff4d-nzvff
	426ab87d58d7f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   755e87bbe77b8       coredns-7db6d8ff4d-bb2jg
	7dbab85d8958b       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      2 minutes ago        Running             kindnet-cni               1                   364906e40ee24       kindnet-j4l89
	8da050dd3d84f       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      2 minutes ago        Running             etcd                      1                   0952300a11886       etcd-ha-794405
	3fc14f09da5ac       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      2 minutes ago        Running             kube-scheduler            1                   5f94d6def70c9       kube-scheduler-ha-794405
	ad32ae050fd04       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      2 minutes ago        Exited              kube-controller-manager   1                   78ae745caafbb       kube-controller-manager-ha-794405
	b81d1356b5384       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      2 minutes ago        Exited              kube-apiserver            2                   665b3ece6fea2       kube-apiserver-ha-794405
	882dc7ddd36ca       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   030fd183fc5d7       busybox-fc5497c4f-9t4xg
	34646ba311f51       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   0a85f31b7216e       coredns-7db6d8ff4d-nzvff
	11e098645d7d8       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   c21b66fe5a20a       coredns-7db6d8ff4d-bb2jg
	5005f4869048e       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    13 minutes ago       Exited              kindnet-cni               0                   a04c14b520cac       kindnet-j4l89
	2992a8242c5e7       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      13 minutes ago       Exited              kube-proxy                0                   afea598394fc6       kube-proxy-llkz8
	fca3429715988       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      13 minutes ago       Exited              kube-scheduler            0                   da888d4d893d6       kube-scheduler-ha-794405
	e224997d35927       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago       Exited              etcd                      0                   a93bf9947672a       etcd-ha-794405
	
	
	==> coredns [11e098645d7d850c038085ef2398844bd0fc149ede4fc04827afc44a6ff0c20d] <==
	[INFO] 10.244.0.4:57455 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00010734s
	[INFO] 10.244.0.4:49757 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000134817s
	[INFO] 10.244.0.4:34537 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000083091s
	[INFO] 10.244.0.4:59243 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000094884s
	[INFO] 10.244.0.4:32813 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000194094s
	[INFO] 10.244.1.2:51380 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001717695s
	[INFO] 10.244.1.2:41977 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000084863s
	[INFO] 10.244.1.2:45990 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000090641s
	[INFO] 10.244.1.2:55905 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000128239s
	[INFO] 10.244.1.2:57839 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000092047s
	[INFO] 10.244.0.4:52553 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000155036s
	[INFO] 10.244.0.4:60833 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000116165s
	[INFO] 10.244.0.4:58984 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000096169s
	[INFO] 10.244.1.2:56581 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000099926s
	[INFO] 10.244.2.2:47299 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000251364s
	[INFO] 10.244.2.2:54140 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000131767s
	[INFO] 10.244.0.4:37906 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128168s
	[INFO] 10.244.0.4:53897 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000128545s
	[INFO] 10.244.0.4:42232 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000175859s
	[INFO] 10.244.1.2:58375 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000225865s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [34646ba311f51478411d1b66560efa7481f439c4be5e7764de99aa5dc1d517ca] <==
	[INFO] 10.244.0.4:49557 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000090727s
	[INFO] 10.244.0.4:33820 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001835803s
	[INFO] 10.244.0.4:39762 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001456019s
	[INFO] 10.244.1.2:49407 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00010484s
	[INFO] 10.244.1.2:41901 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000153055s
	[INFO] 10.244.1.2:46891 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001271955s
	[INFO] 10.244.2.2:49560 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000127808s
	[INFO] 10.244.2.2:56119 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00007809s
	[INFO] 10.244.2.2:38291 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0002272s
	[INFO] 10.244.2.2:47373 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000074396s
	[INFO] 10.244.0.4:48660 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000051359s
	[INFO] 10.244.1.2:45618 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00016309s
	[INFO] 10.244.1.2:34022 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000090959s
	[INFO] 10.244.1.2:55925 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000187604s
	[INFO] 10.244.2.2:52948 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132206s
	[INFO] 10.244.2.2:50512 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000133066s
	[INFO] 10.244.0.4:56090 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00011653s
	[INFO] 10.244.1.2:53420 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109055s
	[INFO] 10.244.1.2:51296 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000101897s
	[INFO] 10.244.1.2:36056 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000072778s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [426ab87d58d7f19613e1653e84994b65699a76b375341068d0de90ba2bd56b1a] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.8:44156->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[923007543]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 18:00:57.195) (total time: 10679ms):
	Trace[923007543]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.8:44156->10.96.0.1:443: read: connection reset by peer 10675ms (18:01:07.871)
	Trace[923007543]: [10.679405952s] [10.679405952s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.8:44156->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [a58f0d56b3f0a58524ef65306b9ede0fb87c7c9b3fbd2288bffc08a5348a9ae2] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.7:34654->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.7:34654->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.7:34660->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.7:34660->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-794405
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-794405
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=de53afae5e8f4269438099f4dad14d93a8a17e35
	                    minikube.k8s.io/name=ha-794405
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T17_49_48_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 17:49:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-794405
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 18:03:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 18:01:32 +0000   Mon, 29 Jul 2024 17:49:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 18:01:32 +0000   Mon, 29 Jul 2024 17:49:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 18:01:32 +0000   Mon, 29 Jul 2024 17:49:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 18:01:32 +0000   Mon, 29 Jul 2024 17:50:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.102
	  Hostname:    ha-794405
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4f5d049fcd1645d38ff56c6e587d83f8
	  System UUID:                4f5d049f-cd16-45d3-8ff5-6c6e587d83f8
	  Boot ID:                    a36bbb12-7ddf-423d-b68c-d781a4b4af75
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-9t4xg              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7db6d8ff4d-bb2jg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7db6d8ff4d-nzvff             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-794405                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-j4l89                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-794405             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-794405    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-llkz8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-794405             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-794405                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 13m    kube-proxy       
	  Normal   Starting                 114s   kube-proxy       
	  Normal   NodeHasNoDiskPressure    13m    kubelet          Node ha-794405 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 13m    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  13m    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  13m    kubelet          Node ha-794405 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     13m    kubelet          Node ha-794405 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m    node-controller  Node ha-794405 event: Registered Node ha-794405 in Controller
	  Normal   NodeReady                13m    kubelet          Node ha-794405 status is now: NodeReady
	  Normal   RegisteredNode           12m    node-controller  Node ha-794405 event: Registered Node ha-794405 in Controller
	  Normal   RegisteredNode           11m    node-controller  Node ha-794405 event: Registered Node ha-794405 in Controller
	  Warning  ContainerGCFailed        3m33s  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           104s   node-controller  Node ha-794405 event: Registered Node ha-794405 in Controller
	  Normal   RegisteredNode           98s    node-controller  Node ha-794405 event: Registered Node ha-794405 in Controller
	  Normal   RegisteredNode           31s    node-controller  Node ha-794405 event: Registered Node ha-794405 in Controller
	
	
	Name:               ha-794405-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-794405-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=de53afae5e8f4269438099f4dad14d93a8a17e35
	                    minikube.k8s.io/name=ha-794405
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T17_50_52_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 17:50:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-794405-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 18:03:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 18:02:11 +0000   Mon, 29 Jul 2024 18:01:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 18:02:11 +0000   Mon, 29 Jul 2024 18:01:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 18:02:11 +0000   Mon, 29 Jul 2024 18:01:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 18:02:11 +0000   Mon, 29 Jul 2024 18:01:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.62
	  Hostname:    ha-794405-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 437dda8ebd384bf294c14831928d98f5
	  System UUID:                437dda8e-bd38-4bf2-94c1-4831928d98f5
	  Boot ID:                    c1b6964d-d82c-4781-a4fc-aca957036bf0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-kq6g2                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-794405-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-8qgq5                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-794405-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-794405-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-qcmxl                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-794405-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-794405-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 109s                   kube-proxy       
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-794405-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-794405-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)      kubelet          Node ha-794405-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                    node-controller  Node ha-794405-m02 event: Registered Node ha-794405-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-794405-m02 event: Registered Node ha-794405-m02 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-794405-m02 event: Registered Node ha-794405-m02 in Controller
	  Normal  NodeNotReady             8m55s                  node-controller  Node ha-794405-m02 status is now: NodeNotReady
	  Normal  Starting                 2m18s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m17s (x8 over 2m18s)  kubelet          Node ha-794405-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m17s (x8 over 2m18s)  kubelet          Node ha-794405-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m17s (x7 over 2m18s)  kubelet          Node ha-794405-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           104s                   node-controller  Node ha-794405-m02 event: Registered Node ha-794405-m02 in Controller
	  Normal  RegisteredNode           98s                    node-controller  Node ha-794405-m02 event: Registered Node ha-794405-m02 in Controller
	  Normal  RegisteredNode           31s                    node-controller  Node ha-794405-m02 event: Registered Node ha-794405-m02 in Controller
	
	
	Name:               ha-794405-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-794405-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=de53afae5e8f4269438099f4dad14d93a8a17e35
	                    minikube.k8s.io/name=ha-794405
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T17_52_06_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 17:52:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-794405-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 18:03:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 18:02:48 +0000   Mon, 29 Jul 2024 18:02:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 18:02:48 +0000   Mon, 29 Jul 2024 18:02:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 18:02:48 +0000   Mon, 29 Jul 2024 18:02:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 18:02:48 +0000   Mon, 29 Jul 2024 18:02:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.185
	  Hostname:    ha-794405-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7788bd32e72d421d86476277253535d2
	  System UUID:                7788bd32-e72d-421d-8647-6277253535d2
	  Boot ID:                    303c6fe9-8a3d-4d49-90d1-c6ab89822e0c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-8xr2r                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-794405-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-g2qqp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-794405-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-794405-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-ndmlm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-794405-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-794405-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 46s                kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-794405-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-794405-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-794405-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-794405-m03 event: Registered Node ha-794405-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-794405-m03 event: Registered Node ha-794405-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-794405-m03 event: Registered Node ha-794405-m03 in Controller
	  Normal   RegisteredNode           104s               node-controller  Node ha-794405-m03 event: Registered Node ha-794405-m03 in Controller
	  Normal   RegisteredNode           98s                node-controller  Node ha-794405-m03 event: Registered Node ha-794405-m03 in Controller
	  Normal   NodeNotReady             64s                node-controller  Node ha-794405-m03 status is now: NodeNotReady
	  Normal   Starting                 62s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  62s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  62s (x2 over 62s)  kubelet          Node ha-794405-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s (x2 over 62s)  kubelet          Node ha-794405-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s (x2 over 62s)  kubelet          Node ha-794405-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 62s                kubelet          Node ha-794405-m03 has been rebooted, boot id: 303c6fe9-8a3d-4d49-90d1-c6ab89822e0c
	  Normal   NodeReady                62s                kubelet          Node ha-794405-m03 status is now: NodeReady
	  Normal   RegisteredNode           31s                node-controller  Node ha-794405-m03 event: Registered Node ha-794405-m03 in Controller
	
	
	Name:               ha-794405-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-794405-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=de53afae5e8f4269438099f4dad14d93a8a17e35
	                    minikube.k8s.io/name=ha-794405
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T17_53_05_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 17:53:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-794405-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 18:03:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 18:03:12 +0000   Mon, 29 Jul 2024 18:03:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 18:03:12 +0000   Mon, 29 Jul 2024 18:03:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 18:03:12 +0000   Mon, 29 Jul 2024 18:03:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 18:03:12 +0000   Mon, 29 Jul 2024 18:03:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.179
	  Hostname:    ha-794405-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2eee0b726b504b318de9dcda1a6d7202
	  System UUID:                2eee0b72-6b50-4b31-8de9-dcda1a6d7202
	  Boot ID:                    ed914ce7-3f75-4141-a6a5-d94ed455ac91
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-ndgvz       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-nrw9z    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-794405-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-794405-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-794405-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-794405-m04 event: Registered Node ha-794405-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-794405-m04 event: Registered Node ha-794405-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-794405-m04 event: Registered Node ha-794405-m04 in Controller
	  Normal   NodeReady                9m57s              kubelet          Node ha-794405-m04 status is now: NodeReady
	  Normal   RegisteredNode           104s               node-controller  Node ha-794405-m04 event: Registered Node ha-794405-m04 in Controller
	  Normal   RegisteredNode           98s                node-controller  Node ha-794405-m04 event: Registered Node ha-794405-m04 in Controller
	  Normal   NodeNotReady             64s                node-controller  Node ha-794405-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           31s                node-controller  Node ha-794405-m04 event: Registered Node ha-794405-m04 in Controller
	  Normal   Starting                 8s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  8s (x2 over 8s)    kubelet          Node ha-794405-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s (x2 over 8s)    kubelet          Node ha-794405-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s (x2 over 8s)    kubelet          Node ha-794405-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 8s                 kubelet          Node ha-794405-m04 has been rebooted, boot id: ed914ce7-3f75-4141-a6a5-d94ed455ac91
	  Normal   NodeReady                8s                 kubelet          Node ha-794405-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[ +12.653696] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.053781] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058152] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.186373] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.123683] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.267498] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +4.093512] systemd-fstab-generator[766]: Ignoring "noauto" option for root device
	[  +4.553872] systemd-fstab-generator[951]: Ignoring "noauto" option for root device
	[  +0.061033] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.996135] kauditd_printk_skb: 79 callbacks suppressed
	[  +0.105049] systemd-fstab-generator[1368]: Ignoring "noauto" option for root device
	[Jul29 17:50] kauditd_printk_skb: 15 callbacks suppressed
	[ +15.275633] kauditd_printk_skb: 38 callbacks suppressed
	[ +40.101588] kauditd_printk_skb: 26 callbacks suppressed
	[Jul29 17:57] kauditd_printk_skb: 1 callbacks suppressed
	[Jul29 18:00] systemd-fstab-generator[3839]: Ignoring "noauto" option for root device
	[  +0.154635] systemd-fstab-generator[3851]: Ignoring "noauto" option for root device
	[  +0.180756] systemd-fstab-generator[3865]: Ignoring "noauto" option for root device
	[  +0.149352] systemd-fstab-generator[3877]: Ignoring "noauto" option for root device
	[  +0.279379] systemd-fstab-generator[3905]: Ignoring "noauto" option for root device
	[  +0.817780] systemd-fstab-generator[4028]: Ignoring "noauto" option for root device
	[  +2.805465] kauditd_printk_skb: 138 callbacks suppressed
	[ +12.140274] kauditd_printk_skb: 81 callbacks suppressed
	[Jul29 18:01] kauditd_printk_skb: 1 callbacks suppressed
	[ +18.872398] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [8da050dd3d84fd7b5418eca925ae183e7d0f42e560d64a9e2cd6ff830ca57e2b] <==
	{"level":"warn","ts":"2024-07-29T18:02:13.11944Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.185:2380/version","remote-member-id":"42cea3ee7cfe51fc","error":"Get \"https://192.168.39.185:2380/version\": dial tcp 192.168.39.185:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T18:02:13.119548Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"42cea3ee7cfe51fc","error":"Get \"https://192.168.39.185:2380/version\": dial tcp 192.168.39.185:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T18:02:16.391388Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"42cea3ee7cfe51fc","rtt":"0s","error":"dial tcp 192.168.39.185:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T18:02:16.391446Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"42cea3ee7cfe51fc","rtt":"0s","error":"dial tcp 192.168.39.185:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T18:02:17.122837Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.185:2380/version","remote-member-id":"42cea3ee7cfe51fc","error":"Get \"https://192.168.39.185:2380/version\": dial tcp 192.168.39.185:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T18:02:17.122891Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"42cea3ee7cfe51fc","error":"Get \"https://192.168.39.185:2380/version\": dial tcp 192.168.39.185:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T18:02:21.124793Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.185:2380/version","remote-member-id":"42cea3ee7cfe51fc","error":"Get \"https://192.168.39.185:2380/version\": dial tcp 192.168.39.185:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T18:02:21.12485Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"42cea3ee7cfe51fc","error":"Get \"https://192.168.39.185:2380/version\": dial tcp 192.168.39.185:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T18:02:21.392568Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"42cea3ee7cfe51fc","rtt":"0s","error":"dial tcp 192.168.39.185:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T18:02:21.392623Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"42cea3ee7cfe51fc","rtt":"0s","error":"dial tcp 192.168.39.185:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T18:02:25.126668Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.185:2380/version","remote-member-id":"42cea3ee7cfe51fc","error":"Get \"https://192.168.39.185:2380/version\": dial tcp 192.168.39.185:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T18:02:25.126753Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"42cea3ee7cfe51fc","error":"Get \"https://192.168.39.185:2380/version\": dial tcp 192.168.39.185:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T18:02:26.393099Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"42cea3ee7cfe51fc","rtt":"0s","error":"dial tcp 192.168.39.185:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T18:02:26.393546Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"42cea3ee7cfe51fc","rtt":"0s","error":"dial tcp 192.168.39.185:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T18:02:29.129116Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.185:2380/version","remote-member-id":"42cea3ee7cfe51fc","error":"Get \"https://192.168.39.185:2380/version\": dial tcp 192.168.39.185:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T18:02:29.129238Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"42cea3ee7cfe51fc","error":"Get \"https://192.168.39.185:2380/version\": dial tcp 192.168.39.185:2380: connect: connection refused"}
	{"level":"info","ts":"2024-07-29T18:02:30.518456Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"42cea3ee7cfe51fc"}
	{"level":"info","ts":"2024-07-29T18:02:30.522727Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6b93c4bc4617b0fe","remote-peer-id":"42cea3ee7cfe51fc"}
	{"level":"info","ts":"2024-07-29T18:02:30.522789Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"6b93c4bc4617b0fe","remote-peer-id":"42cea3ee7cfe51fc"}
	{"level":"info","ts":"2024-07-29T18:02:30.543262Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"6b93c4bc4617b0fe","to":"42cea3ee7cfe51fc","stream-type":"stream Message"}
	{"level":"info","ts":"2024-07-29T18:02:30.543423Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"6b93c4bc4617b0fe","remote-peer-id":"42cea3ee7cfe51fc"}
	{"level":"info","ts":"2024-07-29T18:02:30.54875Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"6b93c4bc4617b0fe","to":"42cea3ee7cfe51fc","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-07-29T18:02:30.548871Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"6b93c4bc4617b0fe","remote-peer-id":"42cea3ee7cfe51fc"}
	{"level":"warn","ts":"2024-07-29T18:02:31.393664Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"42cea3ee7cfe51fc","rtt":"0s","error":"dial tcp 192.168.39.185:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T18:02:31.393854Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"42cea3ee7cfe51fc","rtt":"0s","error":"dial tcp 192.168.39.185:2380: connect: connection refused"}
	
	
	==> etcd [e224997d35927a2245c0946f16313307ad55fb6c004535e745af98f59921405e] <==
	{"level":"warn","ts":"2024-07-29T17:59:09.011874Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T17:59:08.136133Z","time spent":"875.736693ms","remote":"127.0.0.1:56404","response type":"/etcdserverpb.KV/Range","request count":0,"request size":95,"response count":0,"response size":0,"request content":"key:\"/registry/validatingadmissionpolicybindings/\" range_end:\"/registry/validatingadmissionpolicybindings0\" limit:500 "}
	2024/07/29 17:59:09 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-29T17:59:09.011885Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T17:59:01.305811Z","time spent":"7.706071468s","remote":"127.0.0.1:56028","response type":"/etcdserverpb.KV/Range","request count":0,"request size":65,"response count":0,"response size":0,"request content":"key:\"/registry/serviceaccounts/kube-system/generic-garbage-collector\" "}
	2024/07/29 17:59:09 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/29 17:59:09 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-29T17:59:09.03728Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.102:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T17:59:09.037429Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.102:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-29T17:59:09.037528Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"6b93c4bc4617b0fe","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-07-29T17:59:09.037681Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"6da3c9e913621171"}
	{"level":"info","ts":"2024-07-29T17:59:09.037738Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"6da3c9e913621171"}
	{"level":"info","ts":"2024-07-29T17:59:09.037807Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"6da3c9e913621171"}
	{"level":"info","ts":"2024-07-29T17:59:09.037929Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6b93c4bc4617b0fe","remote-peer-id":"6da3c9e913621171"}
	{"level":"info","ts":"2024-07-29T17:59:09.037987Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6b93c4bc4617b0fe","remote-peer-id":"6da3c9e913621171"}
	{"level":"info","ts":"2024-07-29T17:59:09.038077Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6b93c4bc4617b0fe","remote-peer-id":"6da3c9e913621171"}
	{"level":"info","ts":"2024-07-29T17:59:09.038106Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"6da3c9e913621171"}
	{"level":"info","ts":"2024-07-29T17:59:09.03813Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"42cea3ee7cfe51fc"}
	{"level":"info","ts":"2024-07-29T17:59:09.038176Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"42cea3ee7cfe51fc"}
	{"level":"info","ts":"2024-07-29T17:59:09.038213Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"42cea3ee7cfe51fc"}
	{"level":"info","ts":"2024-07-29T17:59:09.038314Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6b93c4bc4617b0fe","remote-peer-id":"42cea3ee7cfe51fc"}
	{"level":"info","ts":"2024-07-29T17:59:09.038459Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6b93c4bc4617b0fe","remote-peer-id":"42cea3ee7cfe51fc"}
	{"level":"info","ts":"2024-07-29T17:59:09.038526Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6b93c4bc4617b0fe","remote-peer-id":"42cea3ee7cfe51fc"}
	{"level":"info","ts":"2024-07-29T17:59:09.038557Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"42cea3ee7cfe51fc"}
	{"level":"info","ts":"2024-07-29T17:59:09.041703Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.102:2380"}
	{"level":"info","ts":"2024-07-29T17:59:09.041842Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.102:2380"}
	{"level":"info","ts":"2024-07-29T17:59:09.04188Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-794405","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.102:2380"],"advertise-client-urls":["https://192.168.39.102:2379"]}
	
	
	==> kernel <==
	 18:03:20 up 14 min,  0 users,  load average: 0.64, 0.64, 0.38
	Linux ha-794405 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [5005f4869048ef0f06e4eb1b5d7da9e4d2f016398c43e253b17b0445643472b5] <==
	I0729 17:58:45.707700       1 main.go:295] Handling node with IPs: map[192.168.39.62:{}]
	I0729 17:58:45.707724       1 main.go:322] Node ha-794405-m02 has CIDR [10.244.1.0/24] 
	I0729 17:58:45.707991       1 main.go:295] Handling node with IPs: map[192.168.39.185:{}]
	I0729 17:58:45.708099       1 main.go:322] Node ha-794405-m03 has CIDR [10.244.2.0/24] 
	I0729 17:58:45.708193       1 main.go:295] Handling node with IPs: map[192.168.39.179:{}]
	I0729 17:58:45.708214       1 main.go:322] Node ha-794405-m04 has CIDR [10.244.3.0/24] 
	I0729 17:58:55.706494       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0729 17:58:55.706656       1 main.go:299] handling current node
	I0729 17:58:55.706700       1 main.go:295] Handling node with IPs: map[192.168.39.62:{}]
	I0729 17:58:55.706774       1 main.go:322] Node ha-794405-m02 has CIDR [10.244.1.0/24] 
	I0729 17:58:55.706941       1 main.go:295] Handling node with IPs: map[192.168.39.185:{}]
	I0729 17:58:55.706964       1 main.go:322] Node ha-794405-m03 has CIDR [10.244.2.0/24] 
	I0729 17:58:55.707026       1 main.go:295] Handling node with IPs: map[192.168.39.179:{}]
	I0729 17:58:55.707044       1 main.go:322] Node ha-794405-m04 has CIDR [10.244.3.0/24] 
	I0729 17:59:05.705991       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0729 17:59:05.706172       1 main.go:299] handling current node
	I0729 17:59:05.706226       1 main.go:295] Handling node with IPs: map[192.168.39.62:{}]
	I0729 17:59:05.706249       1 main.go:322] Node ha-794405-m02 has CIDR [10.244.1.0/24] 
	I0729 17:59:05.706476       1 main.go:295] Handling node with IPs: map[192.168.39.185:{}]
	I0729 17:59:05.706507       1 main.go:322] Node ha-794405-m03 has CIDR [10.244.2.0/24] 
	I0729 17:59:05.706591       1 main.go:295] Handling node with IPs: map[192.168.39.179:{}]
	I0729 17:59:05.706618       1 main.go:322] Node ha-794405-m04 has CIDR [10.244.3.0/24] 
	E0729 17:59:07.073634       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: the server has asked for the client to provide credentials (get nodes) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	W0729 17:59:08.997551       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Node: Unauthorized
	E0729 17:59:08.997690       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Unauthorized
	
	
	==> kindnet [7dbab85d8958b230048a5f45b40bce6e9b85e71b81ec97b2ea07de656b74bc98] <==
	I0729 18:02:46.756545       1 main.go:322] Node ha-794405-m04 has CIDR [10.244.3.0/24] 
	I0729 18:02:56.761716       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0729 18:02:56.761884       1 main.go:299] handling current node
	I0729 18:02:56.761934       1 main.go:295] Handling node with IPs: map[192.168.39.62:{}]
	I0729 18:02:56.761987       1 main.go:322] Node ha-794405-m02 has CIDR [10.244.1.0/24] 
	I0729 18:02:56.762424       1 main.go:295] Handling node with IPs: map[192.168.39.185:{}]
	I0729 18:02:56.762462       1 main.go:322] Node ha-794405-m03 has CIDR [10.244.2.0/24] 
	I0729 18:02:56.762561       1 main.go:295] Handling node with IPs: map[192.168.39.179:{}]
	I0729 18:02:56.762585       1 main.go:322] Node ha-794405-m04 has CIDR [10.244.3.0/24] 
	I0729 18:03:06.761930       1 main.go:295] Handling node with IPs: map[192.168.39.62:{}]
	I0729 18:03:06.762429       1 main.go:322] Node ha-794405-m02 has CIDR [10.244.1.0/24] 
	I0729 18:03:06.762654       1 main.go:295] Handling node with IPs: map[192.168.39.185:{}]
	I0729 18:03:06.762689       1 main.go:322] Node ha-794405-m03 has CIDR [10.244.2.0/24] 
	I0729 18:03:06.762771       1 main.go:295] Handling node with IPs: map[192.168.39.179:{}]
	I0729 18:03:06.762797       1 main.go:322] Node ha-794405-m04 has CIDR [10.244.3.0/24] 
	I0729 18:03:06.762871       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0729 18:03:06.762921       1 main.go:299] handling current node
	I0729 18:03:16.757137       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0729 18:03:16.757277       1 main.go:299] handling current node
	I0729 18:03:16.757331       1 main.go:295] Handling node with IPs: map[192.168.39.62:{}]
	I0729 18:03:16.757418       1 main.go:322] Node ha-794405-m02 has CIDR [10.244.1.0/24] 
	I0729 18:03:16.757668       1 main.go:295] Handling node with IPs: map[192.168.39.185:{}]
	I0729 18:03:16.757858       1 main.go:322] Node ha-794405-m03 has CIDR [10.244.2.0/24] 
	I0729 18:03:16.758226       1 main.go:295] Handling node with IPs: map[192.168.39.179:{}]
	I0729 18:03:16.758301       1 main.go:322] Node ha-794405-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [45eb8375f5352d20d949845ff3b0b2c7daa0aae371096814bf64f44c7d52ed79] <==
	I0729 18:01:24.363491       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0729 18:01:24.363527       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0729 18:01:24.363670       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0729 18:01:24.443913       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0729 18:01:24.445081       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 18:01:24.451826       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0729 18:01:24.451892       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0729 18:01:24.452594       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 18:01:24.453192       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 18:01:24.453224       1 policy_source.go:224] refreshing policies
	I0729 18:01:24.453515       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0729 18:01:24.453603       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0729 18:01:24.453838       1 shared_informer.go:320] Caches are synced for configmaps
	W0729 18:01:24.461696       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.185]
	I0729 18:01:24.463090       1 controller.go:615] quota admission added evaluator for: endpoints
	I0729 18:01:24.464331       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0729 18:01:24.464507       1 aggregator.go:165] initial CRD sync complete...
	I0729 18:01:24.464549       1 autoregister_controller.go:141] Starting autoregister controller
	I0729 18:01:24.464573       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0729 18:01:24.464596       1 cache.go:39] Caches are synced for autoregister controller
	I0729 18:01:24.469925       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0729 18:01:24.473563       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0729 18:01:24.538804       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 18:01:25.352470       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0729 18:01:26.106289       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.102 192.168.39.185 192.168.39.62]
	
	
	==> kube-apiserver [b81d1356b5384c04eb260c496317eb5d514e3314aaa5c6e50a23cf4802945475] <==
	I0729 18:00:46.109557       1 options.go:221] external host was not specified, using 192.168.39.102
	I0729 18:00:46.114327       1 server.go:148] Version: v1.30.3
	I0729 18:00:46.117239       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 18:00:46.860797       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0729 18:00:46.861628       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 18:00:46.864995       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0729 18:00:46.865091       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0729 18:00:46.865343       1 instance.go:299] Using reconciler: lease
	W0729 18:01:06.857867       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0729 18:01:06.857906       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0729 18:01:06.866043       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [3b09eb16bdfe9b6222221af56f9412362f12c483a4f77154ecc705b5c7446d76] <==
	I0729 18:01:42.555553       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0729 18:01:42.572002       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0729 18:01:42.658024       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 18:01:42.683051       1 shared_informer.go:320] Caches are synced for taint
	I0729 18:01:42.683292       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0729 18:01:42.683516       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-794405"
	I0729 18:01:42.683570       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-794405-m02"
	I0729 18:01:42.683599       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-794405-m03"
	I0729 18:01:42.683639       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-794405-m04"
	I0729 18:01:42.683679       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0729 18:01:42.690995       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 18:01:43.068108       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 18:01:43.068150       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0729 18:01:43.118978       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 18:01:56.443232       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-tb2fd EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-tb2fd\": the object has been modified; please apply your changes to the latest version and try again"
	I0729 18:01:56.446642       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"bf44f1dd-61f5-4415-97d9-857a3d6d41ec", APIVersion:"v1", ResourceVersion:"240", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-tb2fd EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-tb2fd": the object has been modified; please apply your changes to the latest version and try again
	I0729 18:01:56.465163       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="60.155152ms"
	I0729 18:01:56.465433       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="154.154µs"
	I0729 18:02:16.198886       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-794405-m04"
	I0729 18:02:16.300340       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.533742ms"
	I0729 18:02:16.300534       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.206µs"
	I0729 18:02:19.222530       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.48µs"
	I0729 18:02:37.679300       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.941308ms"
	I0729 18:02:37.680271       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="74.146µs"
	I0729 18:03:12.477966       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-794405-m04"
	
	
	==> kube-controller-manager [ad32ae050fd042a7e8521c9b70e73bc33fb0877060422ce42bf04b9e2d2810cc] <==
	I0729 18:00:46.970802       1 serving.go:380] Generated self-signed cert in-memory
	I0729 18:00:47.558053       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0729 18:00:47.558135       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 18:00:47.560056       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0729 18:00:47.560205       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0729 18:00:47.560743       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0729 18:00:47.560859       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0729 18:01:07.873309       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.102:8443/healthz\": dial tcp 192.168.39.102:8443: connect: connection refused"
	
	
	==> kube-proxy [2992a8242c5e7814d064049bad66f003aaa89848eaa74422ed7c90776fdf849f] <==
	E0729 17:58:02.112048       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1839": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 17:58:05.184528       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1839": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 17:58:05.185580       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1839": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 17:58:05.185522       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-794405&resourceVersion=1835": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 17:58:05.185637       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-794405&resourceVersion=1835": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 17:58:05.185712       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1857": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 17:58:05.185801       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1857": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 17:58:11.326895       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1839": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 17:58:11.327082       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1857": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 17:58:11.327125       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1857": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 17:58:11.327143       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1839": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 17:58:11.326985       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-794405&resourceVersion=1835": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 17:58:11.327192       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-794405&resourceVersion=1835": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 17:58:20.542748       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-794405&resourceVersion=1835": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 17:58:20.542931       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-794405&resourceVersion=1835": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 17:58:23.615575       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1839": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 17:58:23.615998       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1839": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 17:58:23.616438       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1857": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 17:58:23.616580       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1857": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 17:58:45.119140       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1839": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 17:58:45.119258       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1839": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 17:58:45.119411       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-794405&resourceVersion=1835": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 17:58:45.119457       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-794405&resourceVersion=1835": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 17:58:51.263415       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1857": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 17:58:51.263586       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1857": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [3f8c70a5ed56917c765a69bdfe93f88f8dcab61db45495c6865aec4a2e5fa2d0] <==
	I0729 18:00:47.118991       1 server_linux.go:69] "Using iptables proxy"
	E0729 18:00:47.998627       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-794405\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0729 18:00:51.071010       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-794405\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0729 18:00:54.142190       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-794405\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0729 18:01:00.286950       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-794405\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0729 18:01:09.502907       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-794405\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0729 18:01:25.689553       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.102"]
	I0729 18:01:25.827983       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 18:01:25.828073       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 18:01:25.828092       1 server_linux.go:165] "Using iptables Proxier"
	I0729 18:01:25.834062       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 18:01:25.834855       1 server.go:872] "Version info" version="v1.30.3"
	I0729 18:01:25.835194       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 18:01:25.839591       1 config.go:192] "Starting service config controller"
	I0729 18:01:25.839689       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 18:01:25.839849       1 config.go:101] "Starting endpoint slice config controller"
	I0729 18:01:25.839960       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 18:01:25.843962       1 config.go:319] "Starting node config controller"
	I0729 18:01:25.844042       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 18:01:25.940691       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 18:01:25.941116       1 shared_informer.go:320] Caches are synced for service config
	I0729 18:01:25.944091       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [3fc14f09da5ac482d6b2832f18bc5db12bd83b21f6e619f8e9df03c691c6ce88] <==
	W0729 18:01:16.403533       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.102:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	E0729 18:01:16.403597       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.102:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	W0729 18:01:16.561922       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.102:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	E0729 18:01:16.561977       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.102:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	W0729 18:01:16.959428       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.102:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	E0729 18:01:16.959481       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.102:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	W0729 18:01:17.035340       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.102:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	E0729 18:01:17.035466       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.102:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	W0729 18:01:17.117643       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.102:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	E0729 18:01:17.117747       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.102:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	W0729 18:01:17.297826       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.102:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	E0729 18:01:17.297943       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.102:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	W0729 18:01:17.464535       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.102:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	E0729 18:01:17.464602       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.102:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	W0729 18:01:17.752042       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.102:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	E0729 18:01:17.752158       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.102:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	W0729 18:01:17.864157       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.102:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	E0729 18:01:17.864302       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.102:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	W0729 18:01:24.369616       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 18:01:24.370288       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 18:01:24.370511       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 18:01:24.370917       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 18:01:24.372571       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 18:01:24.372663       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0729 18:01:25.378802       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [fca3429715988ae09489d3d0f512d28babc61b2e2b8a3324612fba1c47839f25] <==
	W0729 17:59:01.143231       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 17:59:01.143337       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0729 17:59:01.340223       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 17:59:01.340429       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 17:59:01.895552       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 17:59:01.895606       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 17:59:02.017551       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 17:59:02.017621       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 17:59:02.218701       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 17:59:02.218757       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 17:59:02.357555       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 17:59:02.357602       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 17:59:02.373847       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 17:59:02.373975       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 17:59:02.708542       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 17:59:02.708684       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0729 17:59:02.774851       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 17:59:02.774904       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 17:59:03.101865       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 17:59:03.101979       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 17:59:03.203061       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 17:59:03.203124       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 17:59:03.545103       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 17:59:03.545193       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 17:59:08.968012       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 29 18:01:21 ha-794405 kubelet[1375]: I0729 18:01:21.789757    1375 status_manager.go:853] "Failed to get status for pod" podUID="27e018d547ebb2f3d9e79e0b37116ab4" pod="kube-system/kube-vip-ha-794405" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-vip-ha-794405\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 29 18:01:21 ha-794405 kubelet[1375]: E0729 18:01:21.790307    1375 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-794405\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-794405?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 29 18:01:21 ha-794405 kubelet[1375]: E0729 18:01:21.790467    1375 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Jul 29 18:01:22 ha-794405 kubelet[1375]: I0729 18:01:22.489537    1375 scope.go:117] "RemoveContainer" containerID="5f9e8665b504be0996b984457804170679d11fc9dcecd8236afb3f70519b4827"
	Jul 29 18:01:22 ha-794405 kubelet[1375]: E0729 18:01:22.489718    1375 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(0e08d093-f8b5-4614-9be2-5832f7cafa75)\"" pod="kube-system/storage-provisioner" podUID="0e08d093-f8b5-4614-9be2-5832f7cafa75"
	Jul 29 18:01:22 ha-794405 kubelet[1375]: I0729 18:01:22.489947    1375 scope.go:117] "RemoveContainer" containerID="b81d1356b5384c04eb260c496317eb5d514e3314aaa5c6e50a23cf4802945475"
	Jul 29 18:01:24 ha-794405 kubelet[1375]: I0729 18:01:24.861674    1375 status_manager.go:853] "Failed to get status for pod" podUID="5110118fe5cf51b6a61d9f9785be3c3c" pod="kube-system/kube-apiserver-ha-794405" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-794405\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 29 18:01:24 ha-794405 kubelet[1375]: W0729 18:01:24.861680    1375 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=1829": dial tcp 192.168.39.254:8443: connect: no route to host
	Jul 29 18:01:24 ha-794405 kubelet[1375]: E0729 18:01:24.862499    1375 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=1829": dial tcp 192.168.39.254:8443: connect: no route to host
	Jul 29 18:01:30 ha-794405 kubelet[1375]: I0729 18:01:30.489622    1375 scope.go:117] "RemoveContainer" containerID="ad32ae050fd042a7e8521c9b70e73bc33fb0877060422ce42bf04b9e2d2810cc"
	Jul 29 18:01:34 ha-794405 kubelet[1375]: I0729 18:01:34.489570    1375 scope.go:117] "RemoveContainer" containerID="5f9e8665b504be0996b984457804170679d11fc9dcecd8236afb3f70519b4827"
	Jul 29 18:01:34 ha-794405 kubelet[1375]: E0729 18:01:34.489827    1375 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(0e08d093-f8b5-4614-9be2-5832f7cafa75)\"" pod="kube-system/storage-provisioner" podUID="0e08d093-f8b5-4614-9be2-5832f7cafa75"
	Jul 29 18:01:46 ha-794405 kubelet[1375]: I0729 18:01:46.489628    1375 scope.go:117] "RemoveContainer" containerID="5f9e8665b504be0996b984457804170679d11fc9dcecd8236afb3f70519b4827"
	Jul 29 18:01:47 ha-794405 kubelet[1375]: E0729 18:01:47.517841    1375 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 18:01:47 ha-794405 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 18:01:47 ha-794405 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 18:01:47 ha-794405 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 18:01:47 ha-794405 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 18:02:18 ha-794405 kubelet[1375]: I0729 18:02:18.489630    1375 kubelet.go:1917] "Trying to delete pod" pod="kube-system/kube-vip-ha-794405" podUID="0e782ab8-0d52-4894-b003-493294ab4710"
	Jul 29 18:02:18 ha-794405 kubelet[1375]: I0729 18:02:18.511218    1375 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-794405"
	Jul 29 18:02:47 ha-794405 kubelet[1375]: E0729 18:02:47.515525    1375 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 18:02:47 ha-794405 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 18:02:47 ha-794405 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 18:02:47 ha-794405 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 18:02:47 ha-794405 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 18:03:19.433263  113286 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19339-88081/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-794405 -n ha-794405
helpers_test.go:261: (dbg) Run:  kubectl --context ha-794405 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (375.23s)
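Note on the kubelet log above: the repeated "Could not set up iptables canary" errors occur because ip6tables cannot initialize the `nat' table inside the guest ("Table does not exist (do you need to insmod?)"), which usually means the VM kernel has no ip6table_nat module loaded. Whether that contributes to this particular failure is not established by the log alone, but it is easy to verify by hand. A minimal diagnostic sketch, not part of the test suite; the module name is inferred from the error text:

  # Open a shell on the primary control-plane VM of this profile.
  out/minikube-linux-amd64 -p ha-794405 ssh
  # Inside the VM: check whether the IPv6 NAT module is present, and try to load it.
  lsmod | grep ip6table_nat || sudo modprobe ip6table_nat
  # If the module loads, listing the nat table should no longer error out.
  sudo ip6tables -t nat -L -n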

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (141.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-794405 stop -v=7 --alsologtostderr: exit status 82 (2m0.455558709s)

                                                
                                                
-- stdout --
	* Stopping node "ha-794405-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 18:03:39.090113  113691 out.go:291] Setting OutFile to fd 1 ...
	I0729 18:03:39.090375  113691 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:03:39.090385  113691 out.go:304] Setting ErrFile to fd 2...
	I0729 18:03:39.090391  113691 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:03:39.090576  113691 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19339-88081/.minikube/bin
	I0729 18:03:39.090840  113691 out.go:298] Setting JSON to false
	I0729 18:03:39.090934  113691 mustload.go:65] Loading cluster: ha-794405
	I0729 18:03:39.091280  113691 config.go:182] Loaded profile config "ha-794405": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:03:39.091380  113691 profile.go:143] Saving config to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/config.json ...
	I0729 18:03:39.091574  113691 mustload.go:65] Loading cluster: ha-794405
	I0729 18:03:39.091725  113691 config.go:182] Loaded profile config "ha-794405": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:03:39.091766  113691 stop.go:39] StopHost: ha-794405-m04
	I0729 18:03:39.092211  113691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:03:39.092264  113691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:03:39.106910  113691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41593
	I0729 18:03:39.107323  113691 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:03:39.107913  113691 main.go:141] libmachine: Using API Version  1
	I0729 18:03:39.107936  113691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:03:39.108293  113691 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:03:39.110422  113691 out.go:177] * Stopping node "ha-794405-m04"  ...
	I0729 18:03:39.111497  113691 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0729 18:03:39.111529  113691 main.go:141] libmachine: (ha-794405-m04) Calling .DriverName
	I0729 18:03:39.111758  113691 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0729 18:03:39.111796  113691 main.go:141] libmachine: (ha-794405-m04) Calling .GetSSHHostname
	I0729 18:03:39.114544  113691 main.go:141] libmachine: (ha-794405-m04) DBG | domain ha-794405-m04 has defined MAC address 52:54:00:9f:d3:c3 in network mk-ha-794405
	I0729 18:03:39.115001  113691 main.go:141] libmachine: (ha-794405-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:d3:c3", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 19:03:07 +0000 UTC Type:0 Mac:52:54:00:9f:d3:c3 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-794405-m04 Clientid:01:52:54:00:9f:d3:c3}
	I0729 18:03:39.115028  113691 main.go:141] libmachine: (ha-794405-m04) DBG | domain ha-794405-m04 has defined IP address 192.168.39.179 and MAC address 52:54:00:9f:d3:c3 in network mk-ha-794405
	I0729 18:03:39.115187  113691 main.go:141] libmachine: (ha-794405-m04) Calling .GetSSHPort
	I0729 18:03:39.115362  113691 main.go:141] libmachine: (ha-794405-m04) Calling .GetSSHKeyPath
	I0729 18:03:39.115515  113691 main.go:141] libmachine: (ha-794405-m04) Calling .GetSSHUsername
	I0729 18:03:39.115666  113691 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m04/id_rsa Username:docker}
	I0729 18:03:39.195915  113691 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0729 18:03:39.249404  113691 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0729 18:03:39.301784  113691 main.go:141] libmachine: Stopping "ha-794405-m04"...
	I0729 18:03:39.301815  113691 main.go:141] libmachine: (ha-794405-m04) Calling .GetState
	I0729 18:03:39.303400  113691 main.go:141] libmachine: (ha-794405-m04) Calling .Stop
	I0729 18:03:39.306515  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 0/120
	I0729 18:03:40.308031  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 1/120
	I0729 18:03:41.309606  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 2/120
	I0729 18:03:42.310927  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 3/120
	I0729 18:03:43.312234  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 4/120
	I0729 18:03:44.314198  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 5/120
	I0729 18:03:45.315552  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 6/120
	I0729 18:03:46.317834  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 7/120
	I0729 18:03:47.319107  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 8/120
	I0729 18:03:48.321061  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 9/120
	I0729 18:03:49.323171  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 10/120
	I0729 18:03:50.324640  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 11/120
	I0729 18:03:51.326043  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 12/120
	I0729 18:03:52.327417  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 13/120
	I0729 18:03:53.328902  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 14/120
	I0729 18:03:54.330965  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 15/120
	I0729 18:03:55.332374  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 16/120
	I0729 18:03:56.333801  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 17/120
	I0729 18:03:57.335178  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 18/120
	I0729 18:03:58.336721  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 19/120
	I0729 18:03:59.338953  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 20/120
	I0729 18:04:00.341221  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 21/120
	I0729 18:04:01.342695  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 22/120
	I0729 18:04:02.343987  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 23/120
	I0729 18:04:03.345497  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 24/120
	I0729 18:04:04.347222  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 25/120
	I0729 18:04:05.348576  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 26/120
	I0729 18:04:06.349827  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 27/120
	I0729 18:04:07.351287  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 28/120
	I0729 18:04:08.352577  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 29/120
	I0729 18:04:09.354004  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 30/120
	I0729 18:04:10.355998  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 31/120
	I0729 18:04:11.357240  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 32/120
	I0729 18:04:12.358617  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 33/120
	I0729 18:04:13.360315  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 34/120
	I0729 18:04:14.362074  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 35/120
	I0729 18:04:15.363242  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 36/120
	I0729 18:04:16.364620  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 37/120
	I0729 18:04:17.366376  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 38/120
	I0729 18:04:18.367634  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 39/120
	I0729 18:04:19.369642  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 40/120
	I0729 18:04:20.371209  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 41/120
	I0729 18:04:21.372641  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 42/120
	I0729 18:04:22.374061  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 43/120
	I0729 18:04:23.375387  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 44/120
	I0729 18:04:24.376847  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 45/120
	I0729 18:04:25.378116  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 46/120
	I0729 18:04:26.379571  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 47/120
	I0729 18:04:27.381181  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 48/120
	I0729 18:04:28.383267  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 49/120
	I0729 18:04:29.385265  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 50/120
	I0729 18:04:30.386429  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 51/120
	I0729 18:04:31.387735  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 52/120
	I0729 18:04:32.389372  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 53/120
	I0729 18:04:33.390675  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 54/120
	I0729 18:04:34.392017  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 55/120
	I0729 18:04:35.393839  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 56/120
	I0729 18:04:36.395028  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 57/120
	I0729 18:04:37.396278  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 58/120
	I0729 18:04:38.397852  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 59/120
	I0729 18:04:39.399807  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 60/120
	I0729 18:04:40.401203  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 61/120
	I0729 18:04:41.402486  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 62/120
	I0729 18:04:42.403910  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 63/120
	I0729 18:04:43.405295  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 64/120
	I0729 18:04:44.407247  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 65/120
	I0729 18:04:45.408629  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 66/120
	I0729 18:04:46.409847  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 67/120
	I0729 18:04:47.411220  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 68/120
	I0729 18:04:48.413279  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 69/120
	I0729 18:04:49.414800  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 70/120
	I0729 18:04:50.416228  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 71/120
	I0729 18:04:51.417475  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 72/120
	I0729 18:04:52.419436  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 73/120
	I0729 18:04:53.420763  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 74/120
	I0729 18:04:54.422813  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 75/120
	I0729 18:04:55.424164  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 76/120
	I0729 18:04:56.425750  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 77/120
	I0729 18:04:57.427346  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 78/120
	I0729 18:04:58.428666  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 79/120
	I0729 18:04:59.430376  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 80/120
	I0729 18:05:00.432294  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 81/120
	I0729 18:05:01.433679  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 82/120
	I0729 18:05:02.435297  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 83/120
	I0729 18:05:03.436576  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 84/120
	I0729 18:05:04.438647  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 85/120
	I0729 18:05:05.440069  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 86/120
	I0729 18:05:06.441435  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 87/120
	I0729 18:05:07.443517  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 88/120
	I0729 18:05:08.444750  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 89/120
	I0729 18:05:09.446595  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 90/120
	I0729 18:05:10.448281  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 91/120
	I0729 18:05:11.449591  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 92/120
	I0729 18:05:12.451327  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 93/120
	I0729 18:05:13.452952  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 94/120
	I0729 18:05:14.454973  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 95/120
	I0729 18:05:15.457092  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 96/120
	I0729 18:05:16.458419  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 97/120
	I0729 18:05:17.459728  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 98/120
	I0729 18:05:18.461187  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 99/120
	I0729 18:05:19.463280  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 100/120
	I0729 18:05:20.464797  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 101/120
	I0729 18:05:21.466096  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 102/120
	I0729 18:05:22.467518  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 103/120
	I0729 18:05:23.469295  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 104/120
	I0729 18:05:24.471205  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 105/120
	I0729 18:05:25.472558  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 106/120
	I0729 18:05:26.473790  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 107/120
	I0729 18:05:27.475169  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 108/120
	I0729 18:05:28.476769  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 109/120
	I0729 18:05:29.478878  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 110/120
	I0729 18:05:30.481016  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 111/120
	I0729 18:05:31.482236  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 112/120
	I0729 18:05:32.483659  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 113/120
	I0729 18:05:33.484960  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 114/120
	I0729 18:05:34.486756  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 115/120
	I0729 18:05:35.488013  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 116/120
	I0729 18:05:36.489470  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 117/120
	I0729 18:05:37.490743  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 118/120
	I0729 18:05:38.492036  113691 main.go:141] libmachine: (ha-794405-m04) Waiting for machine to stop 119/120
	I0729 18:05:39.493530  113691 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0729 18:05:39.493609  113691 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0729 18:05:39.495339  113691 out.go:177] 
	W0729 18:05:39.496573  113691 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0729 18:05:39.496587  113691 out.go:239] * 
	* 
	W0729 18:05:39.499700  113691 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 18:05:39.501034  113691 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-794405 stop -v=7 --alsologtostderr": exit status 82
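Exit status 82 corresponds to the GUEST_STOP_TIMEOUT shown in the stderr above: the kvm2 driver polled the ha-794405-m04 machine for all 120 iterations and it never left the "Running" state. A hedged reproduction sketch; the virsh calls go through libvirt directly on the CI host and are an assumption about the environment, not something the test itself does:

  # Re-run the stop that timed out and record its exit code.
  out/minikube-linux-amd64 -p ha-794405 stop -v=7 --alsologtostderr; echo "exit=$?"
  # If GUEST_STOP_TIMEOUT persists, inspect the libvirt domain and hard power it off.
  sudo virsh list --all
  sudo virsh destroy ha-794405-m04   # forceful power-off; last resort when a clean stop keeps timing out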
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 status -v=7 --alsologtostderr
E0729 18:05:53.333958   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/client.crt: no such file or directory
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-794405 status -v=7 --alsologtostderr: exit status 3 (18.970933991s)

                                                
                                                
-- stdout --
	ha-794405
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-794405-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-794405-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 18:05:39.549343  114115 out.go:291] Setting OutFile to fd 1 ...
	I0729 18:05:39.549604  114115 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:05:39.549615  114115 out.go:304] Setting ErrFile to fd 2...
	I0729 18:05:39.549621  114115 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:05:39.549842  114115 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19339-88081/.minikube/bin
	I0729 18:05:39.550020  114115 out.go:298] Setting JSON to false
	I0729 18:05:39.550052  114115 mustload.go:65] Loading cluster: ha-794405
	I0729 18:05:39.550174  114115 notify.go:220] Checking for updates...
	I0729 18:05:39.550458  114115 config.go:182] Loaded profile config "ha-794405": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:05:39.550478  114115 status.go:255] checking status of ha-794405 ...
	I0729 18:05:39.550885  114115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:05:39.550939  114115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:05:39.576095  114115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35947
	I0729 18:05:39.576604  114115 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:05:39.577294  114115 main.go:141] libmachine: Using API Version  1
	I0729 18:05:39.577320  114115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:05:39.577772  114115 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:05:39.578004  114115 main.go:141] libmachine: (ha-794405) Calling .GetState
	I0729 18:05:39.579763  114115 status.go:330] ha-794405 host status = "Running" (err=<nil>)
	I0729 18:05:39.579798  114115 host.go:66] Checking if "ha-794405" exists ...
	I0729 18:05:39.580107  114115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:05:39.580171  114115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:05:39.595530  114115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44547
	I0729 18:05:39.595994  114115 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:05:39.596494  114115 main.go:141] libmachine: Using API Version  1
	I0729 18:05:39.596523  114115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:05:39.596841  114115 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:05:39.597056  114115 main.go:141] libmachine: (ha-794405) Calling .GetIP
	I0729 18:05:39.599855  114115 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 18:05:39.600329  114115 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 18:05:39.600359  114115 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 18:05:39.600491  114115 host.go:66] Checking if "ha-794405" exists ...
	I0729 18:05:39.600907  114115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:05:39.600951  114115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:05:39.616357  114115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34879
	I0729 18:05:39.616760  114115 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:05:39.617269  114115 main.go:141] libmachine: Using API Version  1
	I0729 18:05:39.617293  114115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:05:39.617618  114115 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:05:39.617839  114115 main.go:141] libmachine: (ha-794405) Calling .DriverName
	I0729 18:05:39.618111  114115 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 18:05:39.618147  114115 main.go:141] libmachine: (ha-794405) Calling .GetSSHHostname
	I0729 18:05:39.620910  114115 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 18:05:39.621329  114115 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 18:05:39.621355  114115 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 18:05:39.621464  114115 main.go:141] libmachine: (ha-794405) Calling .GetSSHPort
	I0729 18:05:39.621624  114115 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 18:05:39.621788  114115 main.go:141] libmachine: (ha-794405) Calling .GetSSHUsername
	I0729 18:05:39.621946  114115 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405/id_rsa Username:docker}
	I0729 18:05:39.708003  114115 ssh_runner.go:195] Run: systemctl --version
	I0729 18:05:39.714820  114115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:05:39.731933  114115 kubeconfig.go:125] found "ha-794405" server: "https://192.168.39.254:8443"
	I0729 18:05:39.731963  114115 api_server.go:166] Checking apiserver status ...
	I0729 18:05:39.731995  114115 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:05:39.750883  114115 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5156/cgroup
	W0729 18:05:39.764283  114115 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5156/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 18:05:39.764329  114115 ssh_runner.go:195] Run: ls
	I0729 18:05:39.768897  114115 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 18:05:39.775236  114115 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 18:05:39.775261  114115 status.go:422] ha-794405 apiserver status = Running (err=<nil>)
	I0729 18:05:39.775273  114115 status.go:257] ha-794405 status: &{Name:ha-794405 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 18:05:39.775298  114115 status.go:255] checking status of ha-794405-m02 ...
	I0729 18:05:39.775673  114115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:05:39.775709  114115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:05:39.790513  114115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38757
	I0729 18:05:39.790895  114115 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:05:39.791429  114115 main.go:141] libmachine: Using API Version  1
	I0729 18:05:39.791454  114115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:05:39.791749  114115 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:05:39.791935  114115 main.go:141] libmachine: (ha-794405-m02) Calling .GetState
	I0729 18:05:39.793440  114115 status.go:330] ha-794405-m02 host status = "Running" (err=<nil>)
	I0729 18:05:39.793461  114115 host.go:66] Checking if "ha-794405-m02" exists ...
	I0729 18:05:39.793898  114115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:05:39.793941  114115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:05:39.808275  114115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46057
	I0729 18:05:39.808619  114115 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:05:39.809025  114115 main.go:141] libmachine: Using API Version  1
	I0729 18:05:39.809049  114115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:05:39.809370  114115 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:05:39.809546  114115 main.go:141] libmachine: (ha-794405-m02) Calling .GetIP
	I0729 18:05:39.812276  114115 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 18:05:39.812731  114115 main.go:141] libmachine: (ha-794405-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:4a:02", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 19:00:53 +0000 UTC Type:0 Mac:52:54:00:1a:4a:02 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-794405-m02 Clientid:01:52:54:00:1a:4a:02}
	I0729 18:05:39.812767  114115 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 18:05:39.812893  114115 host.go:66] Checking if "ha-794405-m02" exists ...
	I0729 18:05:39.813170  114115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:05:39.813202  114115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:05:39.827773  114115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40909
	I0729 18:05:39.828094  114115 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:05:39.828514  114115 main.go:141] libmachine: Using API Version  1
	I0729 18:05:39.828533  114115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:05:39.828825  114115 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:05:39.829000  114115 main.go:141] libmachine: (ha-794405-m02) Calling .DriverName
	I0729 18:05:39.829184  114115 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 18:05:39.829203  114115 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHHostname
	I0729 18:05:39.831729  114115 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 18:05:39.832146  114115 main.go:141] libmachine: (ha-794405-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:4a:02", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 19:00:53 +0000 UTC Type:0 Mac:52:54:00:1a:4a:02 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-794405-m02 Clientid:01:52:54:00:1a:4a:02}
	I0729 18:05:39.832166  114115 main.go:141] libmachine: (ha-794405-m02) DBG | domain ha-794405-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:1a:4a:02 in network mk-ha-794405
	I0729 18:05:39.832298  114115 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHPort
	I0729 18:05:39.832444  114115 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHKeyPath
	I0729 18:05:39.832577  114115 main.go:141] libmachine: (ha-794405-m02) Calling .GetSSHUsername
	I0729 18:05:39.832701  114115 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m02/id_rsa Username:docker}
	I0729 18:05:39.918661  114115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:05:39.939491  114115 kubeconfig.go:125] found "ha-794405" server: "https://192.168.39.254:8443"
	I0729 18:05:39.939521  114115 api_server.go:166] Checking apiserver status ...
	I0729 18:05:39.939555  114115 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:05:39.957156  114115 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1385/cgroup
	W0729 18:05:39.968410  114115 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1385/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 18:05:39.968458  114115 ssh_runner.go:195] Run: ls
	I0729 18:05:39.973340  114115 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 18:05:39.977643  114115 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 18:05:39.977664  114115 status.go:422] ha-794405-m02 apiserver status = Running (err=<nil>)
	I0729 18:05:39.977672  114115 status.go:257] ha-794405-m02 status: &{Name:ha-794405-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 18:05:39.977687  114115 status.go:255] checking status of ha-794405-m04 ...
	I0729 18:05:39.977969  114115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:05:39.978003  114115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:05:39.993498  114115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39447
	I0729 18:05:39.994085  114115 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:05:39.994732  114115 main.go:141] libmachine: Using API Version  1
	I0729 18:05:39.994760  114115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:05:39.995097  114115 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:05:39.995303  114115 main.go:141] libmachine: (ha-794405-m04) Calling .GetState
	I0729 18:05:39.996970  114115 status.go:330] ha-794405-m04 host status = "Running" (err=<nil>)
	I0729 18:05:39.996994  114115 host.go:66] Checking if "ha-794405-m04" exists ...
	I0729 18:05:39.997388  114115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:05:39.997431  114115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:05:40.013036  114115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38961
	I0729 18:05:40.013385  114115 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:05:40.013823  114115 main.go:141] libmachine: Using API Version  1
	I0729 18:05:40.013838  114115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:05:40.014166  114115 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:05:40.014363  114115 main.go:141] libmachine: (ha-794405-m04) Calling .GetIP
	I0729 18:05:40.016822  114115 main.go:141] libmachine: (ha-794405-m04) DBG | domain ha-794405-m04 has defined MAC address 52:54:00:9f:d3:c3 in network mk-ha-794405
	I0729 18:05:40.017276  114115 main.go:141] libmachine: (ha-794405-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:d3:c3", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 19:03:07 +0000 UTC Type:0 Mac:52:54:00:9f:d3:c3 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-794405-m04 Clientid:01:52:54:00:9f:d3:c3}
	I0729 18:05:40.017315  114115 main.go:141] libmachine: (ha-794405-m04) DBG | domain ha-794405-m04 has defined IP address 192.168.39.179 and MAC address 52:54:00:9f:d3:c3 in network mk-ha-794405
	I0729 18:05:40.017453  114115 host.go:66] Checking if "ha-794405-m04" exists ...
	I0729 18:05:40.017758  114115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:05:40.017803  114115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:05:40.032143  114115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45035
	I0729 18:05:40.032522  114115 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:05:40.032946  114115 main.go:141] libmachine: Using API Version  1
	I0729 18:05:40.032964  114115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:05:40.033224  114115 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:05:40.033401  114115 main.go:141] libmachine: (ha-794405-m04) Calling .DriverName
	I0729 18:05:40.033588  114115 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 18:05:40.033609  114115 main.go:141] libmachine: (ha-794405-m04) Calling .GetSSHHostname
	I0729 18:05:40.036413  114115 main.go:141] libmachine: (ha-794405-m04) DBG | domain ha-794405-m04 has defined MAC address 52:54:00:9f:d3:c3 in network mk-ha-794405
	I0729 18:05:40.036791  114115 main.go:141] libmachine: (ha-794405-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:d3:c3", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 19:03:07 +0000 UTC Type:0 Mac:52:54:00:9f:d3:c3 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-794405-m04 Clientid:01:52:54:00:9f:d3:c3}
	I0729 18:05:40.036822  114115 main.go:141] libmachine: (ha-794405-m04) DBG | domain ha-794405-m04 has defined IP address 192.168.39.179 and MAC address 52:54:00:9f:d3:c3 in network mk-ha-794405
	I0729 18:05:40.037014  114115 main.go:141] libmachine: (ha-794405-m04) Calling .GetSSHPort
	I0729 18:05:40.037196  114115 main.go:141] libmachine: (ha-794405-m04) Calling .GetSSHKeyPath
	I0729 18:05:40.037358  114115 main.go:141] libmachine: (ha-794405-m04) Calling .GetSSHUsername
	I0729 18:05:40.037591  114115 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m04/id_rsa Username:docker}
	W0729 18:05:58.473089  114115 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.179:22: connect: no route to host
	W0729 18:05:58.473212  114115 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.179:22: connect: no route to host
	E0729 18:05:58.473236  114115 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.179:22: connect: no route to host
	I0729 18:05:58.473250  114115 status.go:257] ha-794405-m04 status: &{Name:ha-794405-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0729 18:05:58.473282  114115 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.179:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-794405 status -v=7 --alsologtostderr" : exit status 3
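The stderr above ends with the ha-794405-m04 status probe failing at the SSH dial (dial tcp 192.168.39.179:22: connect: no route to host), so that node is reported as Host:Error and the status command exits non-zero. A minimal manual triage along the same lines, reusing the profile name, key path and address recorded in this run (a sketch for context, not part of the captured test output):

    out/minikube-linux-amd64 -p ha-794405 status -v=7 --alsologtostderr   # re-run the aggregated status check
    sudo virsh domstate ha-794405-m04                                     # is the libvirt domain for m04 still up?
    ssh -o ConnectTimeout=5 \
      -i /home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405-m04/id_rsa \
      docker@192.168.39.179 true                                          # probe the same SSH path the status check dials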
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-794405 -n ha-794405
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-794405 logs -n 25: (1.748028228s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-794405 ssh -n ha-794405-m02 sudo cat                                          | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | /home/docker/cp-test_ha-794405-m03_ha-794405-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-794405 cp ha-794405-m03:/home/docker/cp-test.txt                              | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | ha-794405-m04:/home/docker/cp-test_ha-794405-m03_ha-794405-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-794405 ssh -n                                                                 | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | ha-794405-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-794405 ssh -n ha-794405-m04 sudo cat                                          | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | /home/docker/cp-test_ha-794405-m03_ha-794405-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-794405 cp testdata/cp-test.txt                                                | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | ha-794405-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-794405 ssh -n                                                                 | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | ha-794405-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-794405 cp ha-794405-m04:/home/docker/cp-test.txt                              | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1800704997/001/cp-test_ha-794405-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-794405 ssh -n                                                                 | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | ha-794405-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-794405 cp ha-794405-m04:/home/docker/cp-test.txt                              | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | ha-794405:/home/docker/cp-test_ha-794405-m04_ha-794405.txt                       |           |         |         |                     |                     |
	| ssh     | ha-794405 ssh -n                                                                 | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | ha-794405-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-794405 ssh -n ha-794405 sudo cat                                              | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | /home/docker/cp-test_ha-794405-m04_ha-794405.txt                                 |           |         |         |                     |                     |
	| cp      | ha-794405 cp ha-794405-m04:/home/docker/cp-test.txt                              | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | ha-794405-m02:/home/docker/cp-test_ha-794405-m04_ha-794405-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-794405 ssh -n                                                                 | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | ha-794405-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-794405 ssh -n ha-794405-m02 sudo cat                                          | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | /home/docker/cp-test_ha-794405-m04_ha-794405-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-794405 cp ha-794405-m04:/home/docker/cp-test.txt                              | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | ha-794405-m03:/home/docker/cp-test_ha-794405-m04_ha-794405-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-794405 ssh -n                                                                 | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | ha-794405-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-794405 ssh -n ha-794405-m03 sudo cat                                          | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC | 29 Jul 24 17:53 UTC |
	|         | /home/docker/cp-test_ha-794405-m04_ha-794405-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-794405 node stop m02 -v=7                                                     | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:53 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-794405 node start m02 -v=7                                                    | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:56 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-794405 -v=7                                                           | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:57 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-794405 -v=7                                                                | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:57 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-794405 --wait=true -v=7                                                    | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 17:59 UTC | 29 Jul 24 18:03 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-794405                                                                | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 18:03 UTC |                     |
	| node    | ha-794405 node delete m03 -v=7                                                   | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 18:03 UTC | 29 Jul 24 18:03 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-794405 stop -v=7                                                              | ha-794405 | jenkins | v1.33.1 | 29 Jul 24 18:03 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 17:59:08
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 17:59:08.142945  111957 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:59:08.143224  111957 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:59:08.143235  111957 out.go:304] Setting ErrFile to fd 2...
	I0729 17:59:08.143242  111957 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:59:08.143449  111957 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19339-88081/.minikube/bin
	I0729 17:59:08.143999  111957 out.go:298] Setting JSON to false
	I0729 17:59:08.144985  111957 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":9668,"bootTime":1722266280,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 17:59:08.145046  111957 start.go:139] virtualization: kvm guest
	I0729 17:59:08.147872  111957 out.go:177] * [ha-794405] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 17:59:08.149368  111957 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 17:59:08.149418  111957 notify.go:220] Checking for updates...
	I0729 17:59:08.151939  111957 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 17:59:08.153316  111957 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19339-88081/kubeconfig
	I0729 17:59:08.154892  111957 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19339-88081/.minikube
	I0729 17:59:08.156295  111957 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 17:59:08.157559  111957 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 17:59:08.159284  111957 config.go:182] Loaded profile config "ha-794405": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:59:08.159375  111957 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 17:59:08.159857  111957 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:59:08.159911  111957 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:59:08.175361  111957 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37831
	I0729 17:59:08.175864  111957 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:59:08.176422  111957 main.go:141] libmachine: Using API Version  1
	I0729 17:59:08.176444  111957 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:59:08.176747  111957 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:59:08.176951  111957 main.go:141] libmachine: (ha-794405) Calling .DriverName
	I0729 17:59:08.213586  111957 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 17:59:08.214993  111957 start.go:297] selected driver: kvm2
	I0729 17:59:08.215009  111957 start.go:901] validating driver "kvm2" against &{Name:ha-794405 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.3 ClusterName:ha-794405 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.62 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.185 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.179 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false ef
k:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9
p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:59:08.215175  111957 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 17:59:08.215488  111957 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:59:08.215577  111957 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19339-88081/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 17:59:08.230967  111957 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 17:59:08.231615  111957 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 17:59:08.231648  111957 cni.go:84] Creating CNI manager for ""
	I0729 17:59:08.231656  111957 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0729 17:59:08.231733  111957 start.go:340] cluster config:
	{Name:ha-794405 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-794405 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.62 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.185 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.179 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-til
ler:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:59:08.231900  111957 iso.go:125] acquiring lock: {Name:mkff602c753bfa3e0d79d57ee8dc490f2e8d0298 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:59:08.234226  111957 out.go:177] * Starting "ha-794405" primary control-plane node in "ha-794405" cluster
	I0729 17:59:08.235488  111957 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 17:59:08.235522  111957 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 17:59:08.235536  111957 cache.go:56] Caching tarball of preloaded images
	I0729 17:59:08.235614  111957 preload.go:172] Found /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 17:59:08.235625  111957 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 17:59:08.235760  111957 profile.go:143] Saving config to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/config.json ...
	I0729 17:59:08.235964  111957 start.go:360] acquireMachinesLock for ha-794405: {Name:mkb4fc91615189a18c2505c715f6575ee0da2912 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 17:59:08.236016  111957 start.go:364] duration metric: took 32.947µs to acquireMachinesLock for "ha-794405"
	I0729 17:59:08.236036  111957 start.go:96] Skipping create...Using existing machine configuration
	I0729 17:59:08.236047  111957 fix.go:54] fixHost starting: 
	I0729 17:59:08.236320  111957 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:59:08.236358  111957 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:59:08.251130  111957 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44497
	I0729 17:59:08.251518  111957 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:59:08.252055  111957 main.go:141] libmachine: Using API Version  1
	I0729 17:59:08.252077  111957 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:59:08.252405  111957 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:59:08.252609  111957 main.go:141] libmachine: (ha-794405) Calling .DriverName
	I0729 17:59:08.252748  111957 main.go:141] libmachine: (ha-794405) Calling .GetState
	I0729 17:59:08.254319  111957 fix.go:112] recreateIfNeeded on ha-794405: state=Running err=<nil>
	W0729 17:59:08.254336  111957 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 17:59:08.256275  111957 out.go:177] * Updating the running kvm2 "ha-794405" VM ...
	I0729 17:59:08.257481  111957 machine.go:94] provisionDockerMachine start ...
	I0729 17:59:08.257502  111957 main.go:141] libmachine: (ha-794405) Calling .DriverName
	I0729 17:59:08.257699  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHHostname
	I0729 17:59:08.259843  111957 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:59:08.260219  111957 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:59:08.260248  111957 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:59:08.260383  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHPort
	I0729 17:59:08.260547  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:59:08.260706  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:59:08.260825  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHUsername
	I0729 17:59:08.261042  111957 main.go:141] libmachine: Using SSH client type: native
	I0729 17:59:08.261233  111957 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0729 17:59:08.261249  111957 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 17:59:08.366286  111957 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-794405
	
	I0729 17:59:08.366324  111957 main.go:141] libmachine: (ha-794405) Calling .GetMachineName
	I0729 17:59:08.366616  111957 buildroot.go:166] provisioning hostname "ha-794405"
	I0729 17:59:08.366649  111957 main.go:141] libmachine: (ha-794405) Calling .GetMachineName
	I0729 17:59:08.366911  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHHostname
	I0729 17:59:08.369351  111957 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:59:08.369736  111957 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:59:08.369760  111957 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:59:08.369904  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHPort
	I0729 17:59:08.370096  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:59:08.370248  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:59:08.370376  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHUsername
	I0729 17:59:08.370563  111957 main.go:141] libmachine: Using SSH client type: native
	I0729 17:59:08.370787  111957 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0729 17:59:08.370800  111957 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-794405 && echo "ha-794405" | sudo tee /etc/hostname
	I0729 17:59:08.490421  111957 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-794405
	
	I0729 17:59:08.490447  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHHostname
	I0729 17:59:08.493087  111957 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:59:08.493517  111957 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:59:08.493559  111957 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:59:08.493718  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHPort
	I0729 17:59:08.493901  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:59:08.494086  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:59:08.494246  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHUsername
	I0729 17:59:08.494414  111957 main.go:141] libmachine: Using SSH client type: native
	I0729 17:59:08.494581  111957 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0729 17:59:08.494598  111957 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-794405' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-794405/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-794405' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 17:59:08.597829  111957 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 17:59:08.597867  111957 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19339-88081/.minikube CaCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19339-88081/.minikube}
	I0729 17:59:08.597918  111957 buildroot.go:174] setting up certificates
	I0729 17:59:08.597931  111957 provision.go:84] configureAuth start
	I0729 17:59:08.597942  111957 main.go:141] libmachine: (ha-794405) Calling .GetMachineName
	I0729 17:59:08.598230  111957 main.go:141] libmachine: (ha-794405) Calling .GetIP
	I0729 17:59:08.600889  111957 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:59:08.601231  111957 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:59:08.601258  111957 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:59:08.601421  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHHostname
	I0729 17:59:08.603821  111957 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:59:08.604215  111957 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:59:08.604238  111957 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:59:08.604406  111957 provision.go:143] copyHostCerts
	I0729 17:59:08.604441  111957 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem
	I0729 17:59:08.604526  111957 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem, removing ...
	I0729 17:59:08.604540  111957 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem
	I0729 17:59:08.604622  111957 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem (1078 bytes)
	I0729 17:59:08.604725  111957 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem
	I0729 17:59:08.604753  111957 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem, removing ...
	I0729 17:59:08.604772  111957 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem
	I0729 17:59:08.604822  111957 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem (1123 bytes)
	I0729 17:59:08.604908  111957 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem
	I0729 17:59:08.604932  111957 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem, removing ...
	I0729 17:59:08.604941  111957 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem
	I0729 17:59:08.604979  111957 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem (1679 bytes)
	I0729 17:59:08.605032  111957 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem org=jenkins.ha-794405 san=[127.0.0.1 192.168.39.102 ha-794405 localhost minikube]
	I0729 17:59:08.702069  111957 provision.go:177] copyRemoteCerts
	I0729 17:59:08.702132  111957 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 17:59:08.702154  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHHostname
	I0729 17:59:08.704510  111957 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:59:08.704814  111957 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:59:08.704852  111957 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:59:08.704994  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHPort
	I0729 17:59:08.705187  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:59:08.705373  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHUsername
	I0729 17:59:08.705538  111957 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405/id_rsa Username:docker}
	I0729 17:59:08.788219  111957 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 17:59:08.788298  111957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 17:59:08.813392  111957 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 17:59:08.813460  111957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0729 17:59:08.840257  111957 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 17:59:08.840332  111957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 17:59:08.863830  111957 provision.go:87] duration metric: took 265.887585ms to configureAuth
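	The server certificate generated and copied above lands at /etc/docker/server.pem on the node, carrying the SANs requested at generation time (127.0.0.1, 192.168.39.102, ha-794405, localhost, minikube). A quick spot-check of what actually ended up on the host (a sketch, not part of the provisioning flow; assumes openssl is available in the guest image):
	
	  out/minikube-linux-amd64 -p ha-794405 ssh "sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'"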
	I0729 17:59:08.863850  111957 buildroot.go:189] setting minikube options for container-runtime
	I0729 17:59:08.864066  111957 config.go:182] Loaded profile config "ha-794405": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:59:08.864152  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHHostname
	I0729 17:59:08.866645  111957 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:59:08.867028  111957 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 17:59:08.867054  111957 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 17:59:08.867214  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHPort
	I0729 17:59:08.867380  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:59:08.867537  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 17:59:08.867680  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHUsername
	I0729 17:59:08.867833  111957 main.go:141] libmachine: Using SSH client type: native
	I0729 17:59:08.868008  111957 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0729 17:59:08.868027  111957 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 18:00:39.813265  111957 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 18:00:39.813299  111957 machine.go:97] duration metric: took 1m31.555799087s to provisionDockerMachine
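	The %!s(MISSING) in the provisioning command above is not part of what ran on the host: it is Go's fmt marker for a missing argument, produced when the logger re-renders a command string that contains a literal %s (the same artifact appears later as date +%!s(MISSING).%!N(MISSING) for what is really date +%s.%N). Reconstructed from the output that follows it (a reconstruction, with real newlines inside the quotes as in the original), the effective command is:
	
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio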
	I0729 18:00:39.813315  111957 start.go:293] postStartSetup for "ha-794405" (driver="kvm2")
	I0729 18:00:39.813331  111957 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 18:00:39.813367  111957 main.go:141] libmachine: (ha-794405) Calling .DriverName
	I0729 18:00:39.813724  111957 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 18:00:39.813759  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHHostname
	I0729 18:00:39.817020  111957 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 18:00:39.817525  111957 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 18:00:39.817552  111957 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 18:00:39.817716  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHPort
	I0729 18:00:39.817918  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 18:00:39.818094  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHUsername
	I0729 18:00:39.818225  111957 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405/id_rsa Username:docker}
	I0729 18:00:39.946580  111957 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 18:00:39.961958  111957 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 18:00:39.962000  111957 filesync.go:126] Scanning /home/jenkins/minikube-integration/19339-88081/.minikube/addons for local assets ...
	I0729 18:00:39.962068  111957 filesync.go:126] Scanning /home/jenkins/minikube-integration/19339-88081/.minikube/files for local assets ...
	I0729 18:00:39.962188  111957 filesync.go:149] local asset: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem -> 952822.pem in /etc/ssl/certs
	I0729 18:00:39.962205  111957 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem -> /etc/ssl/certs/952822.pem
	I0729 18:00:39.962330  111957 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 18:00:39.991306  111957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem --> /etc/ssl/certs/952822.pem (1708 bytes)
	I0729 18:00:40.047394  111957 start.go:296] duration metric: took 234.062439ms for postStartSetup
	I0729 18:00:40.047440  111957 main.go:141] libmachine: (ha-794405) Calling .DriverName
	I0729 18:00:40.047791  111957 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0729 18:00:40.047835  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHHostname
	I0729 18:00:40.050206  111957 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 18:00:40.050710  111957 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 18:00:40.050738  111957 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 18:00:40.050896  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHPort
	I0729 18:00:40.051097  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 18:00:40.051300  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHUsername
	I0729 18:00:40.051486  111957 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405/id_rsa Username:docker}
	W0729 18:00:40.133134  111957 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0729 18:00:40.133162  111957 fix.go:56] duration metric: took 1m31.89711748s for fixHost
	I0729 18:00:40.133188  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHHostname
	I0729 18:00:40.135605  111957 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 18:00:40.135965  111957 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 18:00:40.135997  111957 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 18:00:40.136238  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHPort
	I0729 18:00:40.136460  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 18:00:40.136635  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 18:00:40.136748  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHUsername
	I0729 18:00:40.136967  111957 main.go:141] libmachine: Using SSH client type: native
	I0729 18:00:40.137130  111957 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0729 18:00:40.137142  111957 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 18:00:40.241437  111957 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722276040.206346539
	
	I0729 18:00:40.241461  111957 fix.go:216] guest clock: 1722276040.206346539
	I0729 18:00:40.241469  111957 fix.go:229] Guest: 2024-07-29 18:00:40.206346539 +0000 UTC Remote: 2024-07-29 18:00:40.133170141 +0000 UTC m=+92.025983091 (delta=73.176398ms)
	I0729 18:00:40.241490  111957 fix.go:200] guest clock delta is within tolerance: 73.176398ms
	I0729 18:00:40.241496  111957 start.go:83] releasing machines lock for "ha-794405", held for 1m32.005469225s
	I0729 18:00:40.241514  111957 main.go:141] libmachine: (ha-794405) Calling .DriverName
	I0729 18:00:40.241789  111957 main.go:141] libmachine: (ha-794405) Calling .GetIP
	I0729 18:00:40.244372  111957 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 18:00:40.244766  111957 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 18:00:40.244799  111957 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 18:00:40.244916  111957 main.go:141] libmachine: (ha-794405) Calling .DriverName
	I0729 18:00:40.245443  111957 main.go:141] libmachine: (ha-794405) Calling .DriverName
	I0729 18:00:40.245638  111957 main.go:141] libmachine: (ha-794405) Calling .DriverName
	I0729 18:00:40.245769  111957 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 18:00:40.245839  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHHostname
	I0729 18:00:40.245854  111957 ssh_runner.go:195] Run: cat /version.json
	I0729 18:00:40.245872  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHHostname
	I0729 18:00:40.248396  111957 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 18:00:40.248690  111957 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 18:00:40.248764  111957 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 18:00:40.248791  111957 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 18:00:40.248929  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHPort
	I0729 18:00:40.249133  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 18:00:40.249172  111957 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 18:00:40.249197  111957 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 18:00:40.249333  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHUsername
	I0729 18:00:40.249428  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHPort
	I0729 18:00:40.249452  111957 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405/id_rsa Username:docker}
	I0729 18:00:40.249576  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHKeyPath
	I0729 18:00:40.249701  111957 main.go:141] libmachine: (ha-794405) Calling .GetSSHUsername
	I0729 18:00:40.249872  111957 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/ha-794405/id_rsa Username:docker}
	I0729 18:00:40.348184  111957 ssh_runner.go:195] Run: systemctl --version
	I0729 18:00:40.354452  111957 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 18:00:40.512704  111957 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 18:00:40.519487  111957 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 18:00:40.519548  111957 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 18:00:40.531469  111957 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0729 18:00:40.531493  111957 start.go:495] detecting cgroup driver to use...
	I0729 18:00:40.531566  111957 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 18:00:40.551271  111957 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 18:00:40.565558  111957 docker.go:217] disabling cri-docker service (if available) ...
	I0729 18:00:40.565608  111957 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 18:00:40.579329  111957 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 18:00:40.593470  111957 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 18:00:40.756228  111957 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 18:00:40.906230  111957 docker.go:233] disabling docker service ...
	I0729 18:00:40.906299  111957 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 18:00:40.927321  111957 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 18:00:40.940915  111957 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 18:00:41.087497  111957 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 18:00:41.230537  111957 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 18:00:41.244388  111957 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 18:00:41.263696  111957 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 18:00:41.263762  111957 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:00:41.274325  111957 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 18:00:41.274395  111957 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:00:41.285082  111957 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:00:41.296112  111957 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:00:41.307099  111957 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 18:00:41.318164  111957 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:00:41.328737  111957 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:00:41.340389  111957 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
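The sed runs above pin the CRI-O pause image and cgroup driver by replacing whole lines in /etc/crio/crio.conf.d/02-crio.conf. A minimal Go sketch of the same line rewrite on an in-memory copy of the drop-in (illustrative only; minikube itself shells out to sed as logged, and the sample input values are assumed):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Hypothetical starting contents of the 02-crio.conf drop-in.
	conf := `pause_image = "registry.k8s.io/pause:3.8"
cgroup_manager = "systemd"
`
	// Same effect as the logged sed expressions: replace the whole line.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	fmt.Print(conf)
}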
	I0729 18:00:41.351200  111957 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 18:00:41.360961  111957 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 18:00:41.370832  111957 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:00:41.512835  111957 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 18:00:41.824008  111957 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 18:00:41.824080  111957 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
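The "Will wait 60s for socket path /var/run/crio/crio.sock" step boils down to polling stat on the socket until it exists or the deadline passes. A self-contained Go sketch of that wait loop (assumed logic, not minikube's actual code; the poll interval is arbitrary):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or timeout elapses.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("crio socket is ready")
}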
	I0729 18:00:41.829409  111957 start.go:563] Will wait 60s for crictl version
	I0729 18:00:41.829475  111957 ssh_runner.go:195] Run: which crictl
	I0729 18:00:41.833321  111957 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 18:00:41.873470  111957 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 18:00:41.873553  111957 ssh_runner.go:195] Run: crio --version
	I0729 18:00:41.917901  111957 ssh_runner.go:195] Run: crio --version
	I0729 18:00:41.946963  111957 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 18:00:41.948061  111957 main.go:141] libmachine: (ha-794405) Calling .GetIP
	I0729 18:00:41.950817  111957 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 18:00:41.951203  111957 main.go:141] libmachine: (ha-794405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:77:cc", ip: ""} in network mk-ha-794405: {Iface:virbr1 ExpiryTime:2024-07-29 18:49:16 +0000 UTC Type:0 Mac:52:54:00:a5:77:cc Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-794405 Clientid:01:52:54:00:a5:77:cc}
	I0729 18:00:41.951225  111957 main.go:141] libmachine: (ha-794405) DBG | domain ha-794405 has defined IP address 192.168.39.102 and MAC address 52:54:00:a5:77:cc in network mk-ha-794405
	I0729 18:00:41.951437  111957 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 18:00:41.955836  111957 kubeadm.go:883] updating cluster {Name:ha-794405 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-794405 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.62 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.185 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.179 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 18:00:41.955970  111957 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 18:00:41.956035  111957 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:00:42.000669  111957 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 18:00:42.000691  111957 crio.go:433] Images already preloaded, skipping extraction
	I0729 18:00:42.000752  111957 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:00:42.034072  111957 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 18:00:42.034103  111957 cache_images.go:84] Images are preloaded, skipping loading
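The preload check above is driven by `sudo crictl images --output json`. A short Go sketch of decoding that output to list the tags found on the node; the JSON field names (`images`, `id`, `repoTags`) are assumed from the CRI ListImagesResponse and should be treated as an assumption, not a guaranteed schema:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type imageList struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	for _, img := range list.Images {
		fmt.Println(img.ID, img.RepoTags)
	}
}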
	I0729 18:00:42.034122  111957 kubeadm.go:934] updating node { 192.168.39.102 8443 v1.30.3 crio true true} ...
	I0729 18:00:42.034255  111957 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-794405 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.102
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-794405 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 18:00:42.034330  111957 ssh_runner.go:195] Run: crio config
	I0729 18:00:42.085918  111957 cni.go:84] Creating CNI manager for ""
	I0729 18:00:42.085939  111957 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0729 18:00:42.085952  111957 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 18:00:42.085974  111957 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.102 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-794405 NodeName:ha-794405 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.102"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.102 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 18:00:42.086138  111957 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.102
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-794405"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.102
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.102"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
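The kubeadm config above is rendered from the node parameters shown earlier in the log (node name, IP, CRI socket, API server port). A minimal Go text/template sketch of that kind of rendering; the template text and params struct are illustrative only, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

const snippet = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
`

func main() {
	params := struct {
		NodeName, NodeIP, CRISocket string
		APIServerPort               int
	}{
		NodeName:      "ha-794405",
		NodeIP:        "192.168.39.102",
		CRISocket:     "unix:///var/run/crio/crio.sock",
		APIServerPort: 8443,
	}
	tmpl := template.Must(template.New("kubeadm").Parse(snippet))
	if err := tmpl.Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}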
	
	I0729 18:00:42.086166  111957 kube-vip.go:115] generating kube-vip config ...
	I0729 18:00:42.086204  111957 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 18:00:42.098786  111957 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 18:00:42.098923  111957 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
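The kube-vip manifest above advertises 192.168.39.254 as the control-plane VIP on the host network seen in the DHCP lease earlier (192.168.39.0/24). A tiny Go sketch of a sanity check that the VIP falls inside that subnet; this is a hypothetical check for illustration, not part of minikube:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	subnet := netip.MustParsePrefix("192.168.39.0/24")
	vip := netip.MustParseAddr("192.168.39.254")
	fmt.Println("VIP inside node subnet:", subnet.Contains(vip))
}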
	I0729 18:00:42.098982  111957 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 18:00:42.108676  111957 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 18:00:42.108739  111957 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0729 18:00:42.118580  111957 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0729 18:00:42.134869  111957 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 18:00:42.150948  111957 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0729 18:00:42.169421  111957 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0729 18:00:42.186352  111957 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 18:00:42.191342  111957 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:00:42.332316  111957 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:00:42.347734  111957 certs.go:68] Setting up /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405 for IP: 192.168.39.102
	I0729 18:00:42.347770  111957 certs.go:194] generating shared ca certs ...
	I0729 18:00:42.347792  111957 certs.go:226] acquiring lock for ca certs: {Name:mk20106d58b3f22ea0fc0f4b499fa3fa572a2690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:00:42.347969  111957 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key
	I0729 18:00:42.348060  111957 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key
	I0729 18:00:42.348081  111957 certs.go:256] generating profile certs ...
	I0729 18:00:42.348200  111957 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/client.key
	I0729 18:00:42.348234  111957 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.key.730bf625
	I0729 18:00:42.348255  111957 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.crt.730bf625 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.102 192.168.39.62 192.168.39.185 192.168.39.254]
	I0729 18:00:42.546043  111957 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.crt.730bf625 ...
	I0729 18:00:42.546073  111957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.crt.730bf625: {Name:mk5301d530a01d92ef5bab28ae80c6673c6ba236 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:00:42.546247  111957 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.key.730bf625 ...
	I0729 18:00:42.546259  111957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.key.730bf625: {Name:mk898eda88f9fdc9bcded3f5997d6f47978cfb97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:00:42.546328  111957 certs.go:381] copying /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.crt.730bf625 -> /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.crt
	I0729 18:00:42.546478  111957 certs.go:385] copying /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.key.730bf625 -> /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.key
	I0729 18:00:42.546612  111957 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/proxy-client.key
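The apiserver cert generated above carries the service IP, loopback, the three control-plane node IPs, and the VIP as subject alternative names. A compact Go sketch of issuing a serving cert with those SAN IPs using crypto/x509; it self-signs for brevity (minikube signs with its cluster CA instead), and the subject name is an assumption:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// SAN IPs taken from the log line above.
	sans := []net.IP{
		net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		net.ParseIP("192.168.39.102"), net.ParseIP("192.168.39.62"),
		net.ParseIP("192.168.39.185"), net.ParseIP("192.168.39.254"),
	}
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  sans,
	}
	// Self-signed here for brevity; a real setup would sign with the CA key.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}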
	I0729 18:00:42.546629  111957 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 18:00:42.546643  111957 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 18:00:42.546655  111957 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 18:00:42.546666  111957 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 18:00:42.546679  111957 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 18:00:42.546691  111957 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 18:00:42.546703  111957 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 18:00:42.546714  111957 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 18:00:42.546771  111957 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282.pem (1338 bytes)
	W0729 18:00:42.546799  111957 certs.go:480] ignoring /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282_empty.pem, impossibly tiny 0 bytes
	I0729 18:00:42.546809  111957 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 18:00:42.546836  111957 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem (1078 bytes)
	I0729 18:00:42.546860  111957 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem (1123 bytes)
	I0729 18:00:42.546880  111957 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem (1679 bytes)
	I0729 18:00:42.546921  111957 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem (1708 bytes)
	I0729 18:00:42.546951  111957 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282.pem -> /usr/share/ca-certificates/95282.pem
	I0729 18:00:42.546965  111957 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem -> /usr/share/ca-certificates/952822.pem
	I0729 18:00:42.546977  111957 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:00:42.547539  111957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 18:00:42.572631  111957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 18:00:42.596920  111957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 18:00:42.621420  111957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 18:00:42.646491  111957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0729 18:00:42.670020  111957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 18:00:42.694118  111957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 18:00:42.719018  111957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/ha-794405/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 18:00:42.742812  111957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282.pem --> /usr/share/ca-certificates/95282.pem (1338 bytes)
	I0729 18:00:42.766517  111957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem --> /usr/share/ca-certificates/952822.pem (1708 bytes)
	I0729 18:00:42.789686  111957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 18:00:42.813798  111957 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 18:00:42.830300  111957 ssh_runner.go:195] Run: openssl version
	I0729 18:00:42.836403  111957 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/952822.pem && ln -fs /usr/share/ca-certificates/952822.pem /etc/ssl/certs/952822.pem"
	I0729 18:00:42.847285  111957 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/952822.pem
	I0729 18:00:42.851729  111957 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:45 /usr/share/ca-certificates/952822.pem
	I0729 18:00:42.851772  111957 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/952822.pem
	I0729 18:00:42.857305  111957 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/952822.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 18:00:42.866656  111957 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 18:00:42.877340  111957 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:00:42.881654  111957 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:00:42.881717  111957 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:00:42.888664  111957 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 18:00:42.899251  111957 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/95282.pem && ln -fs /usr/share/ca-certificates/95282.pem /etc/ssl/certs/95282.pem"
	I0729 18:00:42.910295  111957 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/95282.pem
	I0729 18:00:42.915048  111957 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:45 /usr/share/ca-certificates/95282.pem
	I0729 18:00:42.915096  111957 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/95282.pem
	I0729 18:00:42.920927  111957 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/95282.pem /etc/ssl/certs/51391683.0"
	I0729 18:00:42.930524  111957 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 18:00:42.934909  111957 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 18:00:42.940428  111957 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 18:00:42.946762  111957 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 18:00:42.952036  111957 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 18:00:42.957559  111957 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 18:00:42.962949  111957 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
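Each `openssl x509 -checkend 86400` call above asks whether a certificate expires within the next 24 hours. The same check can be done natively in Go against a PEM file; a minimal sketch (the path reuses one from the log, and error handling is deliberately simple):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkend reports whether the PEM certificate at path expires within d,
// mirroring `openssl x509 -checkend` semantics.
func checkend(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	expiring, err := checkend("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", expiring)
}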
	I0729 18:00:42.968378  111957 kubeadm.go:392] StartCluster: {Name:ha-794405 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-794405 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.62 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.185 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.179 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:00:42.968516  111957 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 18:00:42.968565  111957 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:00:43.004684  111957 cri.go:89] found id: "55137b5d553c696c4ffdc76b20bdcde1fb2f35602b34e8d264c1438a368c4f42"
	I0729 18:00:43.004708  111957 cri.go:89] found id: "51fad57579f21b5a011457c5c739093243b5f3b431b98db8b3b8f92ac916c53d"
	I0729 18:00:43.004714  111957 cri.go:89] found id: "bb65af2e22c6c1281cad453043b942b0fe5f6f716984cf6ab1a92f89ab851ea9"
	I0729 18:00:43.004718  111957 cri.go:89] found id: "b36f95de1e765db7360f4c567999293aceaf13ae2301b194f86db86199e2fd58"
	I0729 18:00:43.004722  111957 cri.go:89] found id: "34646ba311f51478411d1b66560efa7481f439c4be5e7764de99aa5dc1d517ca"
	I0729 18:00:43.004727  111957 cri.go:89] found id: "11e098645d7d850c038085ef2398844bd0fc149ede4fc04827afc44a6ff0c20d"
	I0729 18:00:43.004731  111957 cri.go:89] found id: "5005f4869048ef0f06e4eb1b5d7da9e4d2f016398c43e253b17b0445643472b5"
	I0729 18:00:43.004734  111957 cri.go:89] found id: "2992a8242c5e7814d064049bad66f003aaa89848eaa74422ed7c90776fdf849f"
	I0729 18:00:43.004737  111957 cri.go:89] found id: "83c7e5300596ed794752e31ceb8cb03e339b3ea52305f02c83e577378000130f"
	I0729 18:00:43.004745  111957 cri.go:89] found id: "152a9fa24ee44b5e9f21db72728d07a6432ed62f3a5c2c05ca7c1cd6de36609a"
	I0729 18:00:43.004748  111957 cri.go:89] found id: "985c673864e1a2dafce96d68ada1dba868c8e47de790d46e403469f1abd8bd8e"
	I0729 18:00:43.004750  111957 cri.go:89] found id: "fca3429715988ae09489d3d0f512d28babc61b2e2b8a3324612fba1c47839f25"
	I0729 18:00:43.004753  111957 cri.go:89] found id: "e224997d35927a2245c0946f16313307ad55fb6c004535e745af98f59921405e"
	I0729 18:00:43.004757  111957 cri.go:89] found id: ""
	I0729 18:00:43.004810  111957 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jul 29 18:05:59 ha-794405 crio[3918]: time="2024-07-29 18:05:59.073869411Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722276359073847104,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=91cb657e-96db-4d86-9632-bf9c00869243 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:05:59 ha-794405 crio[3918]: time="2024-07-29 18:05:59.074319966Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b8bb4ae2-bd7d-411d-9acc-9c4dccbe0a3a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:05:59 ha-794405 crio[3918]: time="2024-07-29 18:05:59.074493088Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b8bb4ae2-bd7d-411d-9acc-9c4dccbe0a3a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:05:59 ha-794405 crio[3918]: time="2024-07-29 18:05:59.074926052Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:02bdfc68d9f617cb2aad358d4bd0e4768765ad7a78e69de442e0a2a7b9e481e7,PodSandboxId:fe6509553c8c1f8ede5c703c4e7e0008dd106f146c9bc7cd735db49b173cda4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722276106500049710,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e08d093-f8b5-4614-9be2-5832f7cafa75,},Annotations:map[string]string{io.kubernetes.container.hash: c0dcef9d,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b09eb16bdfe9b6222221af56f9412362f12c483a4f77154ecc705b5c7446d76,PodSandboxId:78ae745caafbbdc66552c4636aa0eab113341924b6111c26f210dd0991f3a6c9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722276090504934171,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e530a81d1e9a9e39055d59309a089fd,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45eb8375f5352d20d949845ff3b0b2c7daa0aae371096814bf64f44c7d52ed79,PodSandboxId:665b3ece6fea2d40b871d04c74a1ed318e7192c4c50e154e0347eefd0c721dbd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722276082502335996,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5110118fe5cf51b6a61d9f9785be3c3c,},Annotations:map[string]string{io.kubernetes.container.hash: caa197,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df50cbfea05aafd3a106c6ea17ae0d08034a824cbe3f7ca66fc3e1dd432c3285,PodSandboxId:27519eeed1ad345199442cc5514bfaec3420dc11a5be9e87d038b1aa9043e3b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722276078775418847,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9t4xg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ceb96a8b-de79-4d8b-a767-8e61b163b088,},Annotations:map[string]string{io.kubernetes.container.hash: 341563b5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07b1d684046b21ac59ccf46553ab97cb19d10497274411a8820a605f8f912558,PodSandboxId:bd474b6e52dd8f4258494c02353a9706bf4a115ac08edc9fda4665cfe0d21d16,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722276057314132648,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27e018d547ebb2f3d9e79e0b37116ab4,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy
: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f8c70a5ed56917c765a69bdfe93f88f8dcab61db45495c6865aec4a2e5fa2d0,PodSandboxId:6adda675354bdd09ae1a31cdcaf4432f224e4346f92899e8f69f8330d4119235,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722276045685227933,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-llkz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95536eff-3f12-4a7e-9504-c8f6b1acc4cb,},Annotations:map[string]string{io.kubernetes.container.hash: 6b14763f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:5f9e8665b504be0996b984457804170679d11fc9dcecd8236afb3f70519b4827,PodSandboxId:fe6509553c8c1f8ede5c703c4e7e0008dd106f146c9bc7cd735db49b173cda4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722276045706163534,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e08d093-f8b5-4614-9be2-5832f7cafa75,},Annotations:map[string]string{io.kubernetes.container.hash: c0dcef9d,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePerio
d: 30,},},&Container{Id:a58f0d56b3f0a58524ef65306b9ede0fb87c7c9b3fbd2288bffc08a5348a9ae2,PodSandboxId:594dffd8c666427c9f0b82e80bdf51ba17082f244fb2d95a528ab0e7db628de7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722276045653993704,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nzvff,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1e2c116-2549-4e1a-8d79-cd86595db9f3,},Annotations:map[string]string{io.kubernetes.container.hash: ea8e4842,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dbab85d8958b230048a5f45b40bce6e9b85e71b81ec97b2ea07de656b74bc98,PodSandboxId:364906e40ee243740f44e0b1115eba9f7096cbbecde537d539257461a1a2aaf0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722276045539109692,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j4l89,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0b81d74-531b-4878-84ea-654e7b57f0ba,},Annotations:map[string]string{io.kubernetes.container.hash: a8506c0f,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426ab87d58d7f19613e1653e84994b65699a76b375341068d0de90ba2bd56b1a,PodSandboxId:755e87bbe77b8a6dfcf12c237786a46b62272bbb1859cbe73b505d887f36177d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722276045638685182,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bb2jg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee9ad335-25b2-4e6c-a523-47b06ce713dc,},Annotations:map[string]string{io.kubernetes.container.hash: 29990db6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8da050dd3d84fd7b5418eca925ae183e7d0f42e560d64a9e2cd6ff830ca57e2b,PodSandboxId:0952300a118864972c95ae0c0c7728d1d8363fc27b640cd2dbe8d9f66d92af0f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722276045458799949,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3
d262369b7075ef1593bfc8c891dbcd,},Annotations:map[string]string{io.kubernetes.container.hash: 7df42e99,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fc14f09da5ac482d6b2832f18bc5db12bd83b21f6e619f8e9df03c691c6ce88,PodSandboxId:5f94d6def70c976cd013e8895da23850086d4cd552edcaf2b6131cb4d4305cc8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722276045423949312,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c874b85b6752de4391e8b92
749861ca9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad32ae050fd042a7e8521c9b70e73bc33fb0877060422ce42bf04b9e2d2810cc,PodSandboxId:78ae745caafbbdc66552c4636aa0eab113341924b6111c26f210dd0991f3a6c9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722276045331933115,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e530a81d1e9a9e390
55d59309a089fd,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b81d1356b5384c04eb260c496317eb5d514e3314aaa5c6e50a23cf4802945475,PodSandboxId:665b3ece6fea2d40b871d04c74a1ed318e7192c4c50e154e0347eefd0c721dbd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722276045213219904,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5110118fe5cf51b6a61d9f9785be3c3c,},Annot
ations:map[string]string{io.kubernetes.container.hash: caa197,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:882dc7ddd36ca559e59fbe1b16606ac3c40a7268770e2f675775826c1aa17280,PodSandboxId:030fd183fc5d70136c09a8b7e0e2865f44dd2a213638647e35bdbda279b830ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722275551525114402,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9t4xg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ceb96a8b-de79-4d8b-a767-8e61b163b088,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: 341563b5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34646ba311f51478411d1b66560efa7481f439c4be5e7764de99aa5dc1d517ca,PodSandboxId:0a85f31b7216e5e441bcfe38479d4617dde44adc1b035dce9e95d895dee48f2f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722275416642746499,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nzvff,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1e2c116-2549-4e1a-8d79-cd86595db9f3,},Annotations:map[string]string{io.kubernet
es.container.hash: ea8e4842,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11e098645d7d850c038085ef2398844bd0fc149ede4fc04827afc44a6ff0c20d,PodSandboxId:c21b66fe5a20a39a06ae0c7c25ee683e7eba20a1e456cb12273df31e8c1144bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722275416596729390,Labels:map[string]string{io.kubernetes.container.name: coredns
,io.kubernetes.pod.name: coredns-7db6d8ff4d-bb2jg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee9ad335-25b2-4e6c-a523-47b06ce713dc,},Annotations:map[string]string{io.kubernetes.container.hash: 29990db6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5005f4869048ef0f06e4eb1b5d7da9e4d2f016398c43e253b17b0445643472b5,PodSandboxId:a04c14b520cac4fb30d6126c231007d3c29694f71329096e36e4531123b8d5f7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722275404641609024,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j4l89,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0b81d74-531b-4878-84ea-654e7b57f0ba,},Annotations:map[string]string{io.kubernetes.container.hash: a8506c0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2992a8242c5e7814d064049bad66f003aaa89848eaa74422ed7c90776fdf849f,PodSandboxId:afea598394fc694045c2dd49ea65df7a7559d4cad31d50a2ddcf34b62d3e506f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHand
ler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722275401434022832,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-llkz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95536eff-3f12-4a7e-9504-c8f6b1acc4cb,},Annotations:map[string]string{io.kubernetes.container.hash: 6b14763f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fca3429715988ae09489d3d0f512d28babc61b2e2b8a3324612fba1c47839f25,PodSandboxId:da888d4d893d610e002957f561d1f310068a089f638fa6e2f658d571ef154999,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722
eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722275381004990011,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c874b85b6752de4391e8b92749861ca9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e224997d35927a2245c0946f16313307ad55fb6c004535e745af98f59921405e,PodSandboxId:a93bf9947672ad13675c5da4da58a497db7376bebf175bf0121b9f445e340e54,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd
477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722275380962302482,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3d262369b7075ef1593bfc8c891dbcd,},Annotations:map[string]string{io.kubernetes.container.hash: 7df42e99,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b8bb4ae2-bd7d-411d-9acc-9c4dccbe0a3a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:05:59 ha-794405 crio[3918]: time="2024-07-29 18:05:59.122556189Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1cd046b8-e998-4c4a-9e17-8ca1219bd9db name=/runtime.v1.RuntimeService/Version
	Jul 29 18:05:59 ha-794405 crio[3918]: time="2024-07-29 18:05:59.122783765Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1cd046b8-e998-4c4a-9e17-8ca1219bd9db name=/runtime.v1.RuntimeService/Version
	Jul 29 18:05:59 ha-794405 crio[3918]: time="2024-07-29 18:05:59.124003160Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bdf9761a-9c59-4169-b804-6b6946112f4c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:05:59 ha-794405 crio[3918]: time="2024-07-29 18:05:59.124520325Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722276359124495175,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bdf9761a-9c59-4169-b804-6b6946112f4c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:05:59 ha-794405 crio[3918]: time="2024-07-29 18:05:59.124997629Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5d04da31-f58d-4920-9936-ff6053c37a69 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:05:59 ha-794405 crio[3918]: time="2024-07-29 18:05:59.125058683Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5d04da31-f58d-4920-9936-ff6053c37a69 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:05:59 ha-794405 crio[3918]: time="2024-07-29 18:05:59.125534098Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:02bdfc68d9f617cb2aad358d4bd0e4768765ad7a78e69de442e0a2a7b9e481e7,PodSandboxId:fe6509553c8c1f8ede5c703c4e7e0008dd106f146c9bc7cd735db49b173cda4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722276106500049710,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e08d093-f8b5-4614-9be2-5832f7cafa75,},Annotations:map[string]string{io.kubernetes.container.hash: c0dcef9d,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b09eb16bdfe9b6222221af56f9412362f12c483a4f77154ecc705b5c7446d76,PodSandboxId:78ae745caafbbdc66552c4636aa0eab113341924b6111c26f210dd0991f3a6c9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722276090504934171,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e530a81d1e9a9e39055d59309a089fd,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45eb8375f5352d20d949845ff3b0b2c7daa0aae371096814bf64f44c7d52ed79,PodSandboxId:665b3ece6fea2d40b871d04c74a1ed318e7192c4c50e154e0347eefd0c721dbd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722276082502335996,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5110118fe5cf51b6a61d9f9785be3c3c,},Annotations:map[string]string{io.kubernetes.container.hash: caa197,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df50cbfea05aafd3a106c6ea17ae0d08034a824cbe3f7ca66fc3e1dd432c3285,PodSandboxId:27519eeed1ad345199442cc5514bfaec3420dc11a5be9e87d038b1aa9043e3b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722276078775418847,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9t4xg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ceb96a8b-de79-4d8b-a767-8e61b163b088,},Annotations:map[string]string{io.kubernetes.container.hash: 341563b5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07b1d684046b21ac59ccf46553ab97cb19d10497274411a8820a605f8f912558,PodSandboxId:bd474b6e52dd8f4258494c02353a9706bf4a115ac08edc9fda4665cfe0d21d16,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722276057314132648,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27e018d547ebb2f3d9e79e0b37116ab4,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy
: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f8c70a5ed56917c765a69bdfe93f88f8dcab61db45495c6865aec4a2e5fa2d0,PodSandboxId:6adda675354bdd09ae1a31cdcaf4432f224e4346f92899e8f69f8330d4119235,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722276045685227933,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-llkz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95536eff-3f12-4a7e-9504-c8f6b1acc4cb,},Annotations:map[string]string{io.kubernetes.container.hash: 6b14763f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:5f9e8665b504be0996b984457804170679d11fc9dcecd8236afb3f70519b4827,PodSandboxId:fe6509553c8c1f8ede5c703c4e7e0008dd106f146c9bc7cd735db49b173cda4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722276045706163534,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e08d093-f8b5-4614-9be2-5832f7cafa75,},Annotations:map[string]string{io.kubernetes.container.hash: c0dcef9d,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePerio
d: 30,},},&Container{Id:a58f0d56b3f0a58524ef65306b9ede0fb87c7c9b3fbd2288bffc08a5348a9ae2,PodSandboxId:594dffd8c666427c9f0b82e80bdf51ba17082f244fb2d95a528ab0e7db628de7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722276045653993704,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nzvff,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1e2c116-2549-4e1a-8d79-cd86595db9f3,},Annotations:map[string]string{io.kubernetes.container.hash: ea8e4842,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dbab85d8958b230048a5f45b40bce6e9b85e71b81ec97b2ea07de656b74bc98,PodSandboxId:364906e40ee243740f44e0b1115eba9f7096cbbecde537d539257461a1a2aaf0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722276045539109692,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j4l89,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0b81d74-531b-4878-84ea-654e7b57f0ba,},Annotations:map[string]string{io.kubernetes.container.hash: a8506c0f,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426ab87d58d7f19613e1653e84994b65699a76b375341068d0de90ba2bd56b1a,PodSandboxId:755e87bbe77b8a6dfcf12c237786a46b62272bbb1859cbe73b505d887f36177d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722276045638685182,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bb2jg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee9ad335-25b2-4e6c-a523-47b06ce713dc,},Annotations:map[string]string{io.kubernetes.container.hash: 29990db6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8da050dd3d84fd7b5418eca925ae183e7d0f42e560d64a9e2cd6ff830ca57e2b,PodSandboxId:0952300a118864972c95ae0c0c7728d1d8363fc27b640cd2dbe8d9f66d92af0f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722276045458799949,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3
d262369b7075ef1593bfc8c891dbcd,},Annotations:map[string]string{io.kubernetes.container.hash: 7df42e99,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fc14f09da5ac482d6b2832f18bc5db12bd83b21f6e619f8e9df03c691c6ce88,PodSandboxId:5f94d6def70c976cd013e8895da23850086d4cd552edcaf2b6131cb4d4305cc8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722276045423949312,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c874b85b6752de4391e8b92
749861ca9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad32ae050fd042a7e8521c9b70e73bc33fb0877060422ce42bf04b9e2d2810cc,PodSandboxId:78ae745caafbbdc66552c4636aa0eab113341924b6111c26f210dd0991f3a6c9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722276045331933115,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e530a81d1e9a9e390
55d59309a089fd,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b81d1356b5384c04eb260c496317eb5d514e3314aaa5c6e50a23cf4802945475,PodSandboxId:665b3ece6fea2d40b871d04c74a1ed318e7192c4c50e154e0347eefd0c721dbd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722276045213219904,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5110118fe5cf51b6a61d9f9785be3c3c,},Annot
ations:map[string]string{io.kubernetes.container.hash: caa197,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:882dc7ddd36ca559e59fbe1b16606ac3c40a7268770e2f675775826c1aa17280,PodSandboxId:030fd183fc5d70136c09a8b7e0e2865f44dd2a213638647e35bdbda279b830ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722275551525114402,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9t4xg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ceb96a8b-de79-4d8b-a767-8e61b163b088,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: 341563b5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34646ba311f51478411d1b66560efa7481f439c4be5e7764de99aa5dc1d517ca,PodSandboxId:0a85f31b7216e5e441bcfe38479d4617dde44adc1b035dce9e95d895dee48f2f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722275416642746499,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nzvff,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1e2c116-2549-4e1a-8d79-cd86595db9f3,},Annotations:map[string]string{io.kubernet
es.container.hash: ea8e4842,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11e098645d7d850c038085ef2398844bd0fc149ede4fc04827afc44a6ff0c20d,PodSandboxId:c21b66fe5a20a39a06ae0c7c25ee683e7eba20a1e456cb12273df31e8c1144bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722275416596729390,Labels:map[string]string{io.kubernetes.container.name: coredns
,io.kubernetes.pod.name: coredns-7db6d8ff4d-bb2jg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee9ad335-25b2-4e6c-a523-47b06ce713dc,},Annotations:map[string]string{io.kubernetes.container.hash: 29990db6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5005f4869048ef0f06e4eb1b5d7da9e4d2f016398c43e253b17b0445643472b5,PodSandboxId:a04c14b520cac4fb30d6126c231007d3c29694f71329096e36e4531123b8d5f7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722275404641609024,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j4l89,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0b81d74-531b-4878-84ea-654e7b57f0ba,},Annotations:map[string]string{io.kubernetes.container.hash: a8506c0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2992a8242c5e7814d064049bad66f003aaa89848eaa74422ed7c90776fdf849f,PodSandboxId:afea598394fc694045c2dd49ea65df7a7559d4cad31d50a2ddcf34b62d3e506f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHand
ler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722275401434022832,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-llkz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95536eff-3f12-4a7e-9504-c8f6b1acc4cb,},Annotations:map[string]string{io.kubernetes.container.hash: 6b14763f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fca3429715988ae09489d3d0f512d28babc61b2e2b8a3324612fba1c47839f25,PodSandboxId:da888d4d893d610e002957f561d1f310068a089f638fa6e2f658d571ef154999,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722
eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722275381004990011,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c874b85b6752de4391e8b92749861ca9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e224997d35927a2245c0946f16313307ad55fb6c004535e745af98f59921405e,PodSandboxId:a93bf9947672ad13675c5da4da58a497db7376bebf175bf0121b9f445e340e54,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd
477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722275380962302482,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3d262369b7075ef1593bfc8c891dbcd,},Annotations:map[string]string{io.kubernetes.container.hash: 7df42e99,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5d04da31-f58d-4920-9936-ff6053c37a69 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:05:59 ha-794405 crio[3918]: time="2024-07-29 18:05:59.223777419Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7eeb661f-fce3-48a6-9dc9-b800cdb92f82 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:05:59 ha-794405 crio[3918]: time="2024-07-29 18:05:59.223851051Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7eeb661f-fce3-48a6-9dc9-b800cdb92f82 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:05:59 ha-794405 crio[3918]: time="2024-07-29 18:05:59.227665638Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=21c1974f-6f54-4ce4-8795-13d1b13c0048 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:05:59 ha-794405 crio[3918]: time="2024-07-29 18:05:59.229094751Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722276359228989655,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=21c1974f-6f54-4ce4-8795-13d1b13c0048 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:05:59 ha-794405 crio[3918]: time="2024-07-29 18:05:59.231701132Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=95b785c6-dc9d-4eb0-9095-e01075b6481d name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:05:59 ha-794405 crio[3918]: time="2024-07-29 18:05:59.231797872Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=95b785c6-dc9d-4eb0-9095-e01075b6481d name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:05:59 ha-794405 crio[3918]: time="2024-07-29 18:05:59.232225720Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:02bdfc68d9f617cb2aad358d4bd0e4768765ad7a78e69de442e0a2a7b9e481e7,PodSandboxId:fe6509553c8c1f8ede5c703c4e7e0008dd106f146c9bc7cd735db49b173cda4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722276106500049710,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e08d093-f8b5-4614-9be2-5832f7cafa75,},Annotations:map[string]string{io.kubernetes.container.hash: c0dcef9d,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b09eb16bdfe9b6222221af56f9412362f12c483a4f77154ecc705b5c7446d76,PodSandboxId:78ae745caafbbdc66552c4636aa0eab113341924b6111c26f210dd0991f3a6c9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722276090504934171,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e530a81d1e9a9e39055d59309a089fd,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45eb8375f5352d20d949845ff3b0b2c7daa0aae371096814bf64f44c7d52ed79,PodSandboxId:665b3ece6fea2d40b871d04c74a1ed318e7192c4c50e154e0347eefd0c721dbd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722276082502335996,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5110118fe5cf51b6a61d9f9785be3c3c,},Annotations:map[string]string{io.kubernetes.container.hash: caa197,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df50cbfea05aafd3a106c6ea17ae0d08034a824cbe3f7ca66fc3e1dd432c3285,PodSandboxId:27519eeed1ad345199442cc5514bfaec3420dc11a5be9e87d038b1aa9043e3b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722276078775418847,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9t4xg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ceb96a8b-de79-4d8b-a767-8e61b163b088,},Annotations:map[string]string{io.kubernetes.container.hash: 341563b5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07b1d684046b21ac59ccf46553ab97cb19d10497274411a8820a605f8f912558,PodSandboxId:bd474b6e52dd8f4258494c02353a9706bf4a115ac08edc9fda4665cfe0d21d16,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722276057314132648,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27e018d547ebb2f3d9e79e0b37116ab4,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy
: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f8c70a5ed56917c765a69bdfe93f88f8dcab61db45495c6865aec4a2e5fa2d0,PodSandboxId:6adda675354bdd09ae1a31cdcaf4432f224e4346f92899e8f69f8330d4119235,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722276045685227933,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-llkz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95536eff-3f12-4a7e-9504-c8f6b1acc4cb,},Annotations:map[string]string{io.kubernetes.container.hash: 6b14763f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:5f9e8665b504be0996b984457804170679d11fc9dcecd8236afb3f70519b4827,PodSandboxId:fe6509553c8c1f8ede5c703c4e7e0008dd106f146c9bc7cd735db49b173cda4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722276045706163534,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e08d093-f8b5-4614-9be2-5832f7cafa75,},Annotations:map[string]string{io.kubernetes.container.hash: c0dcef9d,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePerio
d: 30,},},&Container{Id:a58f0d56b3f0a58524ef65306b9ede0fb87c7c9b3fbd2288bffc08a5348a9ae2,PodSandboxId:594dffd8c666427c9f0b82e80bdf51ba17082f244fb2d95a528ab0e7db628de7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722276045653993704,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nzvff,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1e2c116-2549-4e1a-8d79-cd86595db9f3,},Annotations:map[string]string{io.kubernetes.container.hash: ea8e4842,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dbab85d8958b230048a5f45b40bce6e9b85e71b81ec97b2ea07de656b74bc98,PodSandboxId:364906e40ee243740f44e0b1115eba9f7096cbbecde537d539257461a1a2aaf0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722276045539109692,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j4l89,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0b81d74-531b-4878-84ea-654e7b57f0ba,},Annotations:map[string]string{io.kubernetes.container.hash: a8506c0f,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426ab87d58d7f19613e1653e84994b65699a76b375341068d0de90ba2bd56b1a,PodSandboxId:755e87bbe77b8a6dfcf12c237786a46b62272bbb1859cbe73b505d887f36177d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722276045638685182,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bb2jg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee9ad335-25b2-4e6c-a523-47b06ce713dc,},Annotations:map[string]string{io.kubernetes.container.hash: 29990db6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8da050dd3d84fd7b5418eca925ae183e7d0f42e560d64a9e2cd6ff830ca57e2b,PodSandboxId:0952300a118864972c95ae0c0c7728d1d8363fc27b640cd2dbe8d9f66d92af0f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722276045458799949,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3
d262369b7075ef1593bfc8c891dbcd,},Annotations:map[string]string{io.kubernetes.container.hash: 7df42e99,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fc14f09da5ac482d6b2832f18bc5db12bd83b21f6e619f8e9df03c691c6ce88,PodSandboxId:5f94d6def70c976cd013e8895da23850086d4cd552edcaf2b6131cb4d4305cc8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722276045423949312,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c874b85b6752de4391e8b92
749861ca9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad32ae050fd042a7e8521c9b70e73bc33fb0877060422ce42bf04b9e2d2810cc,PodSandboxId:78ae745caafbbdc66552c4636aa0eab113341924b6111c26f210dd0991f3a6c9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722276045331933115,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e530a81d1e9a9e390
55d59309a089fd,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b81d1356b5384c04eb260c496317eb5d514e3314aaa5c6e50a23cf4802945475,PodSandboxId:665b3ece6fea2d40b871d04c74a1ed318e7192c4c50e154e0347eefd0c721dbd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722276045213219904,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5110118fe5cf51b6a61d9f9785be3c3c,},Annot
ations:map[string]string{io.kubernetes.container.hash: caa197,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:882dc7ddd36ca559e59fbe1b16606ac3c40a7268770e2f675775826c1aa17280,PodSandboxId:030fd183fc5d70136c09a8b7e0e2865f44dd2a213638647e35bdbda279b830ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722275551525114402,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9t4xg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ceb96a8b-de79-4d8b-a767-8e61b163b088,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: 341563b5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34646ba311f51478411d1b66560efa7481f439c4be5e7764de99aa5dc1d517ca,PodSandboxId:0a85f31b7216e5e441bcfe38479d4617dde44adc1b035dce9e95d895dee48f2f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722275416642746499,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nzvff,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1e2c116-2549-4e1a-8d79-cd86595db9f3,},Annotations:map[string]string{io.kubernet
es.container.hash: ea8e4842,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11e098645d7d850c038085ef2398844bd0fc149ede4fc04827afc44a6ff0c20d,PodSandboxId:c21b66fe5a20a39a06ae0c7c25ee683e7eba20a1e456cb12273df31e8c1144bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722275416596729390,Labels:map[string]string{io.kubernetes.container.name: coredns
,io.kubernetes.pod.name: coredns-7db6d8ff4d-bb2jg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee9ad335-25b2-4e6c-a523-47b06ce713dc,},Annotations:map[string]string{io.kubernetes.container.hash: 29990db6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5005f4869048ef0f06e4eb1b5d7da9e4d2f016398c43e253b17b0445643472b5,PodSandboxId:a04c14b520cac4fb30d6126c231007d3c29694f71329096e36e4531123b8d5f7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722275404641609024,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j4l89,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0b81d74-531b-4878-84ea-654e7b57f0ba,},Annotations:map[string]string{io.kubernetes.container.hash: a8506c0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2992a8242c5e7814d064049bad66f003aaa89848eaa74422ed7c90776fdf849f,PodSandboxId:afea598394fc694045c2dd49ea65df7a7559d4cad31d50a2ddcf34b62d3e506f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHand
ler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722275401434022832,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-llkz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95536eff-3f12-4a7e-9504-c8f6b1acc4cb,},Annotations:map[string]string{io.kubernetes.container.hash: 6b14763f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fca3429715988ae09489d3d0f512d28babc61b2e2b8a3324612fba1c47839f25,PodSandboxId:da888d4d893d610e002957f561d1f310068a089f638fa6e2f658d571ef154999,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722
eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722275381004990011,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c874b85b6752de4391e8b92749861ca9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e224997d35927a2245c0946f16313307ad55fb6c004535e745af98f59921405e,PodSandboxId:a93bf9947672ad13675c5da4da58a497db7376bebf175bf0121b9f445e340e54,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd
477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722275380962302482,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-794405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3d262369b7075ef1593bfc8c891dbcd,},Annotations:map[string]string{io.kubernetes.container.hash: 7df42e99,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=95b785c6-dc9d-4eb0-9095-e01075b6481d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	02bdfc68d9f61       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       4                   fe6509553c8c1       storage-provisioner
	3b09eb16bdfe9       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago       Running             kube-controller-manager   2                   78ae745caafbb       kube-controller-manager-ha-794405
	45eb8375f5352       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      4 minutes ago       Running             kube-apiserver            3                   665b3ece6fea2       kube-apiserver-ha-794405
	df50cbfea05aa       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   27519eeed1ad3       busybox-fc5497c4f-9t4xg
	07b1d684046b2       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      5 minutes ago       Running             kube-vip                  0                   bd474b6e52dd8       kube-vip-ha-794405
	5f9e8665b504b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Exited              storage-provisioner       3                   fe6509553c8c1       storage-provisioner
	3f8c70a5ed569       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      5 minutes ago       Running             kube-proxy                1                   6adda675354bd       kube-proxy-llkz8
	a58f0d56b3f0a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   594dffd8c6664       coredns-7db6d8ff4d-nzvff
	426ab87d58d7f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   755e87bbe77b8       coredns-7db6d8ff4d-bb2jg
	7dbab85d8958b       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      5 minutes ago       Running             kindnet-cni               1                   364906e40ee24       kindnet-j4l89
	8da050dd3d84f       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      5 minutes ago       Running             etcd                      1                   0952300a11886       etcd-ha-794405
	3fc14f09da5ac       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      5 minutes ago       Running             kube-scheduler            1                   5f94d6def70c9       kube-scheduler-ha-794405
	ad32ae050fd04       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      5 minutes ago       Exited              kube-controller-manager   1                   78ae745caafbb       kube-controller-manager-ha-794405
	b81d1356b5384       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      5 minutes ago       Exited              kube-apiserver            2                   665b3ece6fea2       kube-apiserver-ha-794405
	882dc7ddd36ca       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   030fd183fc5d7       busybox-fc5497c4f-9t4xg
	34646ba311f51       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      15 minutes ago      Exited              coredns                   0                   0a85f31b7216e       coredns-7db6d8ff4d-nzvff
	11e098645d7d8       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      15 minutes ago      Exited              coredns                   0                   c21b66fe5a20a       coredns-7db6d8ff4d-bb2jg
	5005f4869048e       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    15 minutes ago      Exited              kindnet-cni               0                   a04c14b520cac       kindnet-j4l89
	2992a8242c5e7       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      15 minutes ago      Exited              kube-proxy                0                   afea598394fc6       kube-proxy-llkz8
	fca3429715988       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      16 minutes ago      Exited              kube-scheduler            0                   da888d4d893d6       kube-scheduler-ha-794405
	e224997d35927       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      16 minutes ago      Exited              etcd                      0                   a93bf9947672a       etcd-ha-794405
	
	
	==> coredns [11e098645d7d850c038085ef2398844bd0fc149ede4fc04827afc44a6ff0c20d] <==
	[INFO] 10.244.0.4:57455 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00010734s
	[INFO] 10.244.0.4:49757 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000134817s
	[INFO] 10.244.0.4:34537 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000083091s
	[INFO] 10.244.0.4:59243 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000094884s
	[INFO] 10.244.0.4:32813 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000194094s
	[INFO] 10.244.1.2:51380 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001717695s
	[INFO] 10.244.1.2:41977 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000084863s
	[INFO] 10.244.1.2:45990 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000090641s
	[INFO] 10.244.1.2:55905 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000128239s
	[INFO] 10.244.1.2:57839 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000092047s
	[INFO] 10.244.0.4:52553 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000155036s
	[INFO] 10.244.0.4:60833 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000116165s
	[INFO] 10.244.0.4:58984 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000096169s
	[INFO] 10.244.1.2:56581 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000099926s
	[INFO] 10.244.2.2:47299 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000251364s
	[INFO] 10.244.2.2:54140 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000131767s
	[INFO] 10.244.0.4:37906 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128168s
	[INFO] 10.244.0.4:53897 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000128545s
	[INFO] 10.244.0.4:42232 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000175859s
	[INFO] 10.244.1.2:58375 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000225865s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [34646ba311f51478411d1b66560efa7481f439c4be5e7764de99aa5dc1d517ca] <==
	[INFO] 10.244.0.4:49557 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000090727s
	[INFO] 10.244.0.4:33820 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001835803s
	[INFO] 10.244.0.4:39762 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001456019s
	[INFO] 10.244.1.2:49407 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00010484s
	[INFO] 10.244.1.2:41901 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000153055s
	[INFO] 10.244.1.2:46891 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001271955s
	[INFO] 10.244.2.2:49560 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000127808s
	[INFO] 10.244.2.2:56119 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00007809s
	[INFO] 10.244.2.2:38291 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0002272s
	[INFO] 10.244.2.2:47373 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000074396s
	[INFO] 10.244.0.4:48660 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000051359s
	[INFO] 10.244.1.2:45618 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00016309s
	[INFO] 10.244.1.2:34022 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000090959s
	[INFO] 10.244.1.2:55925 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000187604s
	[INFO] 10.244.2.2:52948 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132206s
	[INFO] 10.244.2.2:50512 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000133066s
	[INFO] 10.244.0.4:56090 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00011653s
	[INFO] 10.244.1.2:53420 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109055s
	[INFO] 10.244.1.2:51296 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000101897s
	[INFO] 10.244.1.2:36056 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000072778s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [426ab87d58d7f19613e1653e84994b65699a76b375341068d0de90ba2bd56b1a] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.8:44156->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[923007543]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 18:00:57.195) (total time: 10679ms):
	Trace[923007543]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.8:44156->10.96.0.1:443: read: connection reset by peer 10675ms (18:01:07.871)
	Trace[923007543]: [10.679405952s] [10.679405952s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.8:44156->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [a58f0d56b3f0a58524ef65306b9ede0fb87c7c9b3fbd2288bffc08a5348a9ae2] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.7:34654->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.7:34654->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.7:34660->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.7:34660->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-794405
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-794405
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=de53afae5e8f4269438099f4dad14d93a8a17e35
	                    minikube.k8s.io/name=ha-794405
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T17_49_48_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 17:49:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-794405
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 18:05:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 18:01:32 +0000   Mon, 29 Jul 2024 17:49:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 18:01:32 +0000   Mon, 29 Jul 2024 17:49:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 18:01:32 +0000   Mon, 29 Jul 2024 17:49:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 18:01:32 +0000   Mon, 29 Jul 2024 17:50:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.102
	  Hostname:    ha-794405
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4f5d049fcd1645d38ff56c6e587d83f8
	  System UUID:                4f5d049f-cd16-45d3-8ff5-6c6e587d83f8
	  Boot ID:                    a36bbb12-7ddf-423d-b68c-d781a4b4af75
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-9t4xg              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7db6d8ff4d-bb2jg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-7db6d8ff4d-nzvff             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-ha-794405                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-j4l89                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-794405             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-794405    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-llkz8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-794405             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-794405                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 15m    kube-proxy       
	  Normal   Starting                 4m33s  kube-proxy       
	  Normal   NodeHasNoDiskPressure    16m    kubelet          Node ha-794405 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 16m    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  16m    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  16m    kubelet          Node ha-794405 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     16m    kubelet          Node ha-794405 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           15m    node-controller  Node ha-794405 event: Registered Node ha-794405 in Controller
	  Normal   NodeReady                15m    kubelet          Node ha-794405 status is now: NodeReady
	  Normal   RegisteredNode           14m    node-controller  Node ha-794405 event: Registered Node ha-794405 in Controller
	  Normal   RegisteredNode           13m    node-controller  Node ha-794405 event: Registered Node ha-794405 in Controller
	  Warning  ContainerGCFailed        6m12s  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m23s  node-controller  Node ha-794405 event: Registered Node ha-794405 in Controller
	  Normal   RegisteredNode           4m17s  node-controller  Node ha-794405 event: Registered Node ha-794405 in Controller
	  Normal   RegisteredNode           3m10s  node-controller  Node ha-794405 event: Registered Node ha-794405 in Controller
	
	
	Name:               ha-794405-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-794405-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=de53afae5e8f4269438099f4dad14d93a8a17e35
	                    minikube.k8s.io/name=ha-794405
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T17_50_52_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 17:50:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-794405-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 18:05:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 18:04:18 +0000   Mon, 29 Jul 2024 18:04:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 18:04:18 +0000   Mon, 29 Jul 2024 18:04:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 18:04:18 +0000   Mon, 29 Jul 2024 18:04:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 18:04:18 +0000   Mon, 29 Jul 2024 18:04:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.62
	  Hostname:    ha-794405-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 437dda8ebd384bf294c14831928d98f5
	  System UUID:                437dda8e-bd38-4bf2-94c1-4831928d98f5
	  Boot ID:                    c1b6964d-d82c-4781-a4fc-aca957036bf0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-kq6g2                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-794405-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-8qgq5                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-794405-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-794405-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-qcmxl                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-794405-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-794405-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m28s                  kube-proxy       
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-794405-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-794405-m02 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-794405-m02 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           15m                    node-controller  Node ha-794405-m02 event: Registered Node ha-794405-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-794405-m02 event: Registered Node ha-794405-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-794405-m02 event: Registered Node ha-794405-m02 in Controller
	  Normal  NodeNotReady             11m                    node-controller  Node ha-794405-m02 status is now: NodeNotReady
	  Normal  Starting                 4m57s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m57s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m56s (x8 over 4m57s)  kubelet          Node ha-794405-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m56s (x8 over 4m57s)  kubelet          Node ha-794405-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m56s (x7 over 4m57s)  kubelet          Node ha-794405-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m23s                  node-controller  Node ha-794405-m02 event: Registered Node ha-794405-m02 in Controller
	  Normal  RegisteredNode           4m17s                  node-controller  Node ha-794405-m02 event: Registered Node ha-794405-m02 in Controller
	  Normal  RegisteredNode           3m10s                  node-controller  Node ha-794405-m02 event: Registered Node ha-794405-m02 in Controller
	  Normal  NodeNotReady             103s                   node-controller  Node ha-794405-m02 status is now: NodeNotReady
	
	
	Name:               ha-794405-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-794405-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=de53afae5e8f4269438099f4dad14d93a8a17e35
	                    minikube.k8s.io/name=ha-794405
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T17_53_05_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 17:53:04 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-794405-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 18:03:32 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 29 Jul 2024 18:03:12 +0000   Mon, 29 Jul 2024 18:04:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 29 Jul 2024 18:03:12 +0000   Mon, 29 Jul 2024 18:04:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 29 Jul 2024 18:03:12 +0000   Mon, 29 Jul 2024 18:04:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 29 Jul 2024 18:03:12 +0000   Mon, 29 Jul 2024 18:04:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.179
	  Hostname:    ha-794405-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2eee0b726b504b318de9dcda1a6d7202
	  System UUID:                2eee0b72-6b50-4b31-8de9-dcda1a6d7202
	  Boot ID:                    ed914ce7-3f75-4141-a6a5-d94ed455ac91
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-9bpw9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kindnet-ndgvz              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-nrw9z           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   Starting                 2m43s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x2 over 12m)      kubelet          Node ha-794405-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x2 over 12m)      kubelet          Node ha-794405-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x2 over 12m)      kubelet          Node ha-794405-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           12m                    node-controller  Node ha-794405-m04 event: Registered Node ha-794405-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-794405-m04 event: Registered Node ha-794405-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-794405-m04 event: Registered Node ha-794405-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-794405-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m23s                  node-controller  Node ha-794405-m04 event: Registered Node ha-794405-m04 in Controller
	  Normal   RegisteredNode           4m17s                  node-controller  Node ha-794405-m04 event: Registered Node ha-794405-m04 in Controller
	  Normal   NodeNotReady             3m43s                  node-controller  Node ha-794405-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           3m10s                  node-controller  Node ha-794405-m04 event: Registered Node ha-794405-m04 in Controller
	  Normal   Starting                 2m47s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m47s (x2 over 2m47s)  kubelet          Node ha-794405-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m47s (x2 over 2m47s)  kubelet          Node ha-794405-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m47s (x2 over 2m47s)  kubelet          Node ha-794405-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m47s                  kubelet          Node ha-794405-m04 has been rebooted, boot id: ed914ce7-3f75-4141-a6a5-d94ed455ac91
	  Normal   NodeReady                2m47s                  kubelet          Node ha-794405-m04 status is now: NodeReady
	  Normal   NodeNotReady             107s                   node-controller  Node ha-794405-m04 status is now: NodeNotReady
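
	For context, the Unknown/NotReady conditions recorded above can be re-verified against the live node object once the cluster is reachable again. This is an illustrative sketch only; the kubectl context name ha-794405 and the 120s timeout are assumptions, not values taken from the captured output:

	    # print every node condition as Type=Status (Ready flips back to True once the kubelet resumes posting status)
	    kubectl --context ha-794405 get node ha-794405-m04 \
	      -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'

	    # or block until the node reports Ready again
	    kubectl --context ha-794405 wait --for=condition=Ready node/ha-794405-m04 --timeout=120s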
	
	
	==> dmesg <==
	[ +12.653696] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.053781] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058152] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.186373] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.123683] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.267498] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +4.093512] systemd-fstab-generator[766]: Ignoring "noauto" option for root device
	[  +4.553872] systemd-fstab-generator[951]: Ignoring "noauto" option for root device
	[  +0.061033] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.996135] kauditd_printk_skb: 79 callbacks suppressed
	[  +0.105049] systemd-fstab-generator[1368]: Ignoring "noauto" option for root device
	[Jul29 17:50] kauditd_printk_skb: 15 callbacks suppressed
	[ +15.275633] kauditd_printk_skb: 38 callbacks suppressed
	[ +40.101588] kauditd_printk_skb: 26 callbacks suppressed
	[Jul29 17:57] kauditd_printk_skb: 1 callbacks suppressed
	[Jul29 18:00] systemd-fstab-generator[3839]: Ignoring "noauto" option for root device
	[  +0.154635] systemd-fstab-generator[3851]: Ignoring "noauto" option for root device
	[  +0.180756] systemd-fstab-generator[3865]: Ignoring "noauto" option for root device
	[  +0.149352] systemd-fstab-generator[3877]: Ignoring "noauto" option for root device
	[  +0.279379] systemd-fstab-generator[3905]: Ignoring "noauto" option for root device
	[  +0.817780] systemd-fstab-generator[4028]: Ignoring "noauto" option for root device
	[  +2.805465] kauditd_printk_skb: 138 callbacks suppressed
	[ +12.140274] kauditd_printk_skb: 81 callbacks suppressed
	[Jul29 18:01] kauditd_printk_skb: 1 callbacks suppressed
	[ +18.872398] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [8da050dd3d84fd7b5418eca925ae183e7d0f42e560d64a9e2cd6ff830ca57e2b] <==
	{"level":"info","ts":"2024-07-29T18:03:22.497911Z","caller":"traceutil/trace.go:171","msg":"trace[393580138] linearizableReadLoop","detail":"{readStateIndex:2985; appliedIndex:2991; }","duration":"240.07486ms","start":"2024-07-29T18:03:22.257821Z","end":"2024-07-29T18:03:22.497896Z","steps":["trace[393580138] 'read index received'  (duration: 240.071686ms)","trace[393580138] 'applied index is now lower than readState.Index'  (duration: 2.618µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-29T18:03:22.49864Z","caller":"traceutil/trace.go:171","msg":"trace[1344587745] transaction","detail":"{read_only:false; response_revision:2562; number_of_response:1; }","duration":"233.726717ms","start":"2024-07-29T18:03:22.264903Z","end":"2024-07-29T18:03:22.49863Z","steps":["trace[1344587745] 'process raft request'  (duration: 232.549665ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T18:03:22.499347Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"242.838455ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/busybox-fc5497c4f-9bpw9\" ","response":"range_response_count:1 size:1813"}
	{"level":"info","ts":"2024-07-29T18:03:22.499582Z","caller":"traceutil/trace.go:171","msg":"trace[697731326] range","detail":"{range_begin:/registry/pods/default/busybox-fc5497c4f-9bpw9; range_end:; response_count:1; response_revision:2563; }","duration":"243.152957ms","start":"2024-07-29T18:03:22.256414Z","end":"2024-07-29T18:03:22.499567Z","steps":["trace[697731326] 'agreement among raft nodes before linearized reading'  (duration: 242.812652ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T18:03:22.515148Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.95404ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" ","response":"range_response_count:1 size:171"}
	{"level":"info","ts":"2024-07-29T18:03:22.515293Z","caller":"traceutil/trace.go:171","msg":"trace[1495190541] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:1; response_revision:2569; }","duration":"101.171564ms","start":"2024-07-29T18:03:22.414105Z","end":"2024-07-29T18:03:22.515277Z","steps":["trace[1495190541] 'agreement among raft nodes before linearized reading'  (duration: 100.94927ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T18:03:22.515605Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"218.079015ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-794405-m03\" ","response":"range_response_count:1 size:6042"}
	{"level":"info","ts":"2024-07-29T18:03:22.518286Z","caller":"traceutil/trace.go:171","msg":"trace[1997578639] range","detail":"{range_begin:/registry/minions/ha-794405-m03; range_end:; response_count:1; response_revision:2570; }","duration":"220.785239ms","start":"2024-07-29T18:03:22.297485Z","end":"2024-07-29T18:03:22.51827Z","steps":["trace[1997578639] 'agreement among raft nodes before linearized reading'  (duration: 218.022896ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T18:03:25.58377Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b93c4bc4617b0fe switched to configuration voters=(7751755696543609086 7900380174227738993)"}
	{"level":"info","ts":"2024-07-29T18:03:25.585745Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"1cdd3ec65c5f94ba","local-member-id":"6b93c4bc4617b0fe","removed-remote-peer-id":"42cea3ee7cfe51fc","removed-remote-peer-urls":["https://192.168.39.185:2380"]}
	{"level":"info","ts":"2024-07-29T18:03:25.58583Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"42cea3ee7cfe51fc"}
	{"level":"warn","ts":"2024-07-29T18:03:25.586404Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"42cea3ee7cfe51fc"}
	{"level":"info","ts":"2024-07-29T18:03:25.586471Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"42cea3ee7cfe51fc"}
	{"level":"warn","ts":"2024-07-29T18:03:25.586826Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"42cea3ee7cfe51fc"}
	{"level":"info","ts":"2024-07-29T18:03:25.586886Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"42cea3ee7cfe51fc"}
	{"level":"info","ts":"2024-07-29T18:03:25.58715Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6b93c4bc4617b0fe","remote-peer-id":"42cea3ee7cfe51fc"}
	{"level":"warn","ts":"2024-07-29T18:03:25.587503Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6b93c4bc4617b0fe","remote-peer-id":"42cea3ee7cfe51fc","error":"context canceled"}
	{"level":"warn","ts":"2024-07-29T18:03:25.587636Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"42cea3ee7cfe51fc","error":"failed to read 42cea3ee7cfe51fc on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-07-29T18:03:25.587701Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6b93c4bc4617b0fe","remote-peer-id":"42cea3ee7cfe51fc"}
	{"level":"warn","ts":"2024-07-29T18:03:25.58788Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"6b93c4bc4617b0fe","remote-peer-id":"42cea3ee7cfe51fc","error":"context canceled"}
	{"level":"info","ts":"2024-07-29T18:03:25.587935Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6b93c4bc4617b0fe","remote-peer-id":"42cea3ee7cfe51fc"}
	{"level":"info","ts":"2024-07-29T18:03:25.587971Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"42cea3ee7cfe51fc"}
	{"level":"info","ts":"2024-07-29T18:03:25.588086Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"6b93c4bc4617b0fe","removed-remote-peer-id":"42cea3ee7cfe51fc"}
	{"level":"warn","ts":"2024-07-29T18:03:25.612247Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"6b93c4bc4617b0fe","remote-peer-id-stream-handler":"6b93c4bc4617b0fe","remote-peer-id-from":"42cea3ee7cfe51fc"}
	{"level":"warn","ts":"2024-07-29T18:03:25.614302Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"6b93c4bc4617b0fe","remote-peer-id-stream-handler":"6b93c4bc4617b0fe","remote-peer-id-from":"42cea3ee7cfe51fc"}
	
	
	==> etcd [e224997d35927a2245c0946f16313307ad55fb6c004535e745af98f59921405e] <==
	{"level":"warn","ts":"2024-07-29T17:59:09.011874Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T17:59:08.136133Z","time spent":"875.736693ms","remote":"127.0.0.1:56404","response type":"/etcdserverpb.KV/Range","request count":0,"request size":95,"response count":0,"response size":0,"request content":"key:\"/registry/validatingadmissionpolicybindings/\" range_end:\"/registry/validatingadmissionpolicybindings0\" limit:500 "}
	2024/07/29 17:59:09 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-29T17:59:09.011885Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T17:59:01.305811Z","time spent":"7.706071468s","remote":"127.0.0.1:56028","response type":"/etcdserverpb.KV/Range","request count":0,"request size":65,"response count":0,"response size":0,"request content":"key:\"/registry/serviceaccounts/kube-system/generic-garbage-collector\" "}
	2024/07/29 17:59:09 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/29 17:59:09 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-29T17:59:09.03728Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.102:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T17:59:09.037429Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.102:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-29T17:59:09.037528Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"6b93c4bc4617b0fe","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-07-29T17:59:09.037681Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"6da3c9e913621171"}
	{"level":"info","ts":"2024-07-29T17:59:09.037738Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"6da3c9e913621171"}
	{"level":"info","ts":"2024-07-29T17:59:09.037807Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"6da3c9e913621171"}
	{"level":"info","ts":"2024-07-29T17:59:09.037929Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6b93c4bc4617b0fe","remote-peer-id":"6da3c9e913621171"}
	{"level":"info","ts":"2024-07-29T17:59:09.037987Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6b93c4bc4617b0fe","remote-peer-id":"6da3c9e913621171"}
	{"level":"info","ts":"2024-07-29T17:59:09.038077Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6b93c4bc4617b0fe","remote-peer-id":"6da3c9e913621171"}
	{"level":"info","ts":"2024-07-29T17:59:09.038106Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"6da3c9e913621171"}
	{"level":"info","ts":"2024-07-29T17:59:09.03813Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"42cea3ee7cfe51fc"}
	{"level":"info","ts":"2024-07-29T17:59:09.038176Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"42cea3ee7cfe51fc"}
	{"level":"info","ts":"2024-07-29T17:59:09.038213Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"42cea3ee7cfe51fc"}
	{"level":"info","ts":"2024-07-29T17:59:09.038314Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6b93c4bc4617b0fe","remote-peer-id":"42cea3ee7cfe51fc"}
	{"level":"info","ts":"2024-07-29T17:59:09.038459Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6b93c4bc4617b0fe","remote-peer-id":"42cea3ee7cfe51fc"}
	{"level":"info","ts":"2024-07-29T17:59:09.038526Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6b93c4bc4617b0fe","remote-peer-id":"42cea3ee7cfe51fc"}
	{"level":"info","ts":"2024-07-29T17:59:09.038557Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"42cea3ee7cfe51fc"}
	{"level":"info","ts":"2024-07-29T17:59:09.041703Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.102:2380"}
	{"level":"info","ts":"2024-07-29T17:59:09.041842Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.102:2380"}
	{"level":"info","ts":"2024-07-29T17:59:09.04188Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-794405","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.102:2380"],"advertise-client-urls":["https://192.168.39.102:2379"]}
	
	
	==> kernel <==
	 18:05:59 up 16 min,  0 users,  load average: 0.12, 0.42, 0.33
	Linux ha-794405 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [5005f4869048ef0f06e4eb1b5d7da9e4d2f016398c43e253b17b0445643472b5] <==
	I0729 17:58:45.707700       1 main.go:295] Handling node with IPs: map[192.168.39.62:{}]
	I0729 17:58:45.707724       1 main.go:322] Node ha-794405-m02 has CIDR [10.244.1.0/24] 
	I0729 17:58:45.707991       1 main.go:295] Handling node with IPs: map[192.168.39.185:{}]
	I0729 17:58:45.708099       1 main.go:322] Node ha-794405-m03 has CIDR [10.244.2.0/24] 
	I0729 17:58:45.708193       1 main.go:295] Handling node with IPs: map[192.168.39.179:{}]
	I0729 17:58:45.708214       1 main.go:322] Node ha-794405-m04 has CIDR [10.244.3.0/24] 
	I0729 17:58:55.706494       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0729 17:58:55.706656       1 main.go:299] handling current node
	I0729 17:58:55.706700       1 main.go:295] Handling node with IPs: map[192.168.39.62:{}]
	I0729 17:58:55.706774       1 main.go:322] Node ha-794405-m02 has CIDR [10.244.1.0/24] 
	I0729 17:58:55.706941       1 main.go:295] Handling node with IPs: map[192.168.39.185:{}]
	I0729 17:58:55.706964       1 main.go:322] Node ha-794405-m03 has CIDR [10.244.2.0/24] 
	I0729 17:58:55.707026       1 main.go:295] Handling node with IPs: map[192.168.39.179:{}]
	I0729 17:58:55.707044       1 main.go:322] Node ha-794405-m04 has CIDR [10.244.3.0/24] 
	I0729 17:59:05.705991       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0729 17:59:05.706172       1 main.go:299] handling current node
	I0729 17:59:05.706226       1 main.go:295] Handling node with IPs: map[192.168.39.62:{}]
	I0729 17:59:05.706249       1 main.go:322] Node ha-794405-m02 has CIDR [10.244.1.0/24] 
	I0729 17:59:05.706476       1 main.go:295] Handling node with IPs: map[192.168.39.185:{}]
	I0729 17:59:05.706507       1 main.go:322] Node ha-794405-m03 has CIDR [10.244.2.0/24] 
	I0729 17:59:05.706591       1 main.go:295] Handling node with IPs: map[192.168.39.179:{}]
	I0729 17:59:05.706618       1 main.go:322] Node ha-794405-m04 has CIDR [10.244.3.0/24] 
	E0729 17:59:07.073634       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: the server has asked for the client to provide credentials (get nodes) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	W0729 17:59:08.997551       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Node: Unauthorized
	E0729 17:59:08.997690       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Unauthorized
	
	
	==> kindnet [7dbab85d8958b230048a5f45b40bce6e9b85e71b81ec97b2ea07de656b74bc98] <==
	I0729 18:05:16.765905       1 main.go:299] handling current node
	I0729 18:05:26.755736       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0729 18:05:26.755789       1 main.go:299] handling current node
	I0729 18:05:26.755804       1 main.go:295] Handling node with IPs: map[192.168.39.62:{}]
	I0729 18:05:26.755809       1 main.go:322] Node ha-794405-m02 has CIDR [10.244.1.0/24] 
	I0729 18:05:26.755994       1 main.go:295] Handling node with IPs: map[192.168.39.179:{}]
	I0729 18:05:26.756020       1 main.go:322] Node ha-794405-m04 has CIDR [10.244.3.0/24] 
	I0729 18:05:36.755805       1 main.go:295] Handling node with IPs: map[192.168.39.179:{}]
	I0729 18:05:36.755904       1 main.go:322] Node ha-794405-m04 has CIDR [10.244.3.0/24] 
	I0729 18:05:36.756146       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0729 18:05:36.756184       1 main.go:299] handling current node
	I0729 18:05:36.756197       1 main.go:295] Handling node with IPs: map[192.168.39.62:{}]
	I0729 18:05:36.756202       1 main.go:322] Node ha-794405-m02 has CIDR [10.244.1.0/24] 
	I0729 18:05:46.756504       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0729 18:05:46.756552       1 main.go:299] handling current node
	I0729 18:05:46.756566       1 main.go:295] Handling node with IPs: map[192.168.39.62:{}]
	I0729 18:05:46.756594       1 main.go:322] Node ha-794405-m02 has CIDR [10.244.1.0/24] 
	I0729 18:05:46.756752       1 main.go:295] Handling node with IPs: map[192.168.39.179:{}]
	I0729 18:05:46.756787       1 main.go:322] Node ha-794405-m04 has CIDR [10.244.3.0/24] 
	I0729 18:05:56.755773       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0729 18:05:56.755832       1 main.go:299] handling current node
	I0729 18:05:56.755851       1 main.go:295] Handling node with IPs: map[192.168.39.62:{}]
	I0729 18:05:56.755857       1 main.go:322] Node ha-794405-m02 has CIDR [10.244.1.0/24] 
	I0729 18:05:56.756048       1 main.go:295] Handling node with IPs: map[192.168.39.179:{}]
	I0729 18:05:56.756074       1 main.go:322] Node ha-794405-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [45eb8375f5352d20d949845ff3b0b2c7daa0aae371096814bf64f44c7d52ed79] <==
	I0729 18:01:24.363491       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0729 18:01:24.363527       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0729 18:01:24.363670       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0729 18:01:24.443913       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0729 18:01:24.445081       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 18:01:24.451826       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0729 18:01:24.451892       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0729 18:01:24.452594       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 18:01:24.453192       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 18:01:24.453224       1 policy_source.go:224] refreshing policies
	I0729 18:01:24.453515       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0729 18:01:24.453603       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0729 18:01:24.453838       1 shared_informer.go:320] Caches are synced for configmaps
	W0729 18:01:24.461696       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.185]
	I0729 18:01:24.463090       1 controller.go:615] quota admission added evaluator for: endpoints
	I0729 18:01:24.464331       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0729 18:01:24.464507       1 aggregator.go:165] initial CRD sync complete...
	I0729 18:01:24.464549       1 autoregister_controller.go:141] Starting autoregister controller
	I0729 18:01:24.464573       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0729 18:01:24.464596       1 cache.go:39] Caches are synced for autoregister controller
	I0729 18:01:24.469925       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0729 18:01:24.473563       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0729 18:01:24.538804       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 18:01:25.352470       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0729 18:01:26.106289       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.102 192.168.39.185 192.168.39.62]
	
	
	==> kube-apiserver [b81d1356b5384c04eb260c496317eb5d514e3314aaa5c6e50a23cf4802945475] <==
	I0729 18:00:46.109557       1 options.go:221] external host was not specified, using 192.168.39.102
	I0729 18:00:46.114327       1 server.go:148] Version: v1.30.3
	I0729 18:00:46.117239       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 18:00:46.860797       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0729 18:00:46.861628       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 18:00:46.864995       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0729 18:00:46.865091       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0729 18:00:46.865343       1 instance.go:299] Using reconciler: lease
	W0729 18:01:06.857867       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0729 18:01:06.857906       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0729 18:01:06.866043       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [3b09eb16bdfe9b6222221af56f9412362f12c483a4f77154ecc705b5c7446d76] <==
	I0729 18:04:12.775034       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.728572ms"
	I0729 18:04:12.775413       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="132.147µs"
	I0729 18:04:16.611617       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.58078ms"
	I0729 18:04:16.612134       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.721µs"
	E0729 18:04:22.475080       1 gc_controller.go:153] "Failed to get node" err="node \"ha-794405-m03\" not found" logger="pod-garbage-collector-controller" node="ha-794405-m03"
	E0729 18:04:22.475215       1 gc_controller.go:153] "Failed to get node" err="node \"ha-794405-m03\" not found" logger="pod-garbage-collector-controller" node="ha-794405-m03"
	E0729 18:04:22.475255       1 gc_controller.go:153] "Failed to get node" err="node \"ha-794405-m03\" not found" logger="pod-garbage-collector-controller" node="ha-794405-m03"
	E0729 18:04:22.475285       1 gc_controller.go:153] "Failed to get node" err="node \"ha-794405-m03\" not found" logger="pod-garbage-collector-controller" node="ha-794405-m03"
	E0729 18:04:22.475308       1 gc_controller.go:153] "Failed to get node" err="node \"ha-794405-m03\" not found" logger="pod-garbage-collector-controller" node="ha-794405-m03"
	I0729 18:04:22.486905       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-794405-m03"
	I0729 18:04:22.516247       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-794405-m03"
	I0729 18:04:22.516314       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-794405-m03"
	I0729 18:04:22.540324       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-794405-m03"
	I0729 18:04:22.540489       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-g2qqp"
	I0729 18:04:22.568404       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-g2qqp"
	I0729 18:04:22.568532       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-794405-m03"
	I0729 18:04:22.592626       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-794405-m03"
	I0729 18:04:22.592792       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-794405-m03"
	I0729 18:04:22.615575       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-794405-m03"
	I0729 18:04:22.615706       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-ndmlm"
	I0729 18:04:22.641012       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-ndmlm"
	I0729 18:04:22.641082       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-794405-m03"
	I0729 18:04:22.666261       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-794405-m03"
	I0729 18:04:22.937891       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.934051ms"
	I0729 18:04:22.938227       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="61.203µs"
	
	
	==> kube-controller-manager [ad32ae050fd042a7e8521c9b70e73bc33fb0877060422ce42bf04b9e2d2810cc] <==
	I0729 18:00:46.970802       1 serving.go:380] Generated self-signed cert in-memory
	I0729 18:00:47.558053       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0729 18:00:47.558135       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 18:00:47.560056       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0729 18:00:47.560205       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0729 18:00:47.560743       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0729 18:00:47.560859       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0729 18:01:07.873309       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.102:8443/healthz\": dial tcp 192.168.39.102:8443: connect: connection refused"
	
	
	==> kube-proxy [2992a8242c5e7814d064049bad66f003aaa89848eaa74422ed7c90776fdf849f] <==
	E0729 17:58:02.112048       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1839": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 17:58:05.184528       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1839": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 17:58:05.185580       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1839": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 17:58:05.185522       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-794405&resourceVersion=1835": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 17:58:05.185637       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-794405&resourceVersion=1835": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 17:58:05.185712       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1857": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 17:58:05.185801       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1857": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 17:58:11.326895       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1839": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 17:58:11.327082       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1857": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 17:58:11.327125       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1857": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 17:58:11.327143       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1839": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 17:58:11.326985       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-794405&resourceVersion=1835": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 17:58:11.327192       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-794405&resourceVersion=1835": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 17:58:20.542748       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-794405&resourceVersion=1835": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 17:58:20.542931       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-794405&resourceVersion=1835": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 17:58:23.615575       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1839": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 17:58:23.615998       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1839": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 17:58:23.616438       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1857": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 17:58:23.616580       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1857": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 17:58:45.119140       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1839": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 17:58:45.119258       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1839": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 17:58:45.119411       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-794405&resourceVersion=1835": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 17:58:45.119457       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-794405&resourceVersion=1835": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 17:58:51.263415       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1857": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 17:58:51.263586       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1857": dial tcp 192.168.39.254:8443: connect: no route to host
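
	The repeated "no route to host" failures above all target 192.168.39.254:8443, the load-balanced control-plane endpoint used by kube-proxy. Reachability of that VIP can be probed from inside the node; the profile name and address come from the log lines, but the exact curl check below is an assumption, not part of the captured run:

	    # from the ha-794405 node, confirm the control-plane VIP answers on 8443 (any HTTP response proves reachability)
	    out/minikube-linux-amd64 -p ha-794405 ssh "curl -sk https://192.168.39.254:8443/version"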
	
	
	==> kube-proxy [3f8c70a5ed56917c765a69bdfe93f88f8dcab61db45495c6865aec4a2e5fa2d0] <==
	I0729 18:00:47.118991       1 server_linux.go:69] "Using iptables proxy"
	E0729 18:00:47.998627       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-794405\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0729 18:00:51.071010       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-794405\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0729 18:00:54.142190       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-794405\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0729 18:01:00.286950       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-794405\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0729 18:01:09.502907       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-794405\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0729 18:01:25.689553       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.102"]
	I0729 18:01:25.827983       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 18:01:25.828073       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 18:01:25.828092       1 server_linux.go:165] "Using iptables Proxier"
	I0729 18:01:25.834062       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 18:01:25.834855       1 server.go:872] "Version info" version="v1.30.3"
	I0729 18:01:25.835194       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 18:01:25.839591       1 config.go:192] "Starting service config controller"
	I0729 18:01:25.839689       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 18:01:25.839849       1 config.go:101] "Starting endpoint slice config controller"
	I0729 18:01:25.839960       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 18:01:25.843962       1 config.go:319] "Starting node config controller"
	I0729 18:01:25.844042       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 18:01:25.940691       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 18:01:25.941116       1 shared_informer.go:320] Caches are synced for service config
	I0729 18:01:25.944091       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [3fc14f09da5ac482d6b2832f18bc5db12bd83b21f6e619f8e9df03c691c6ce88] <==
	W0729 18:01:16.403533       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.102:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	E0729 18:01:16.403597       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.102:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	W0729 18:01:16.561922       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.102:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	E0729 18:01:16.561977       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.102:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	W0729 18:01:16.959428       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.102:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	E0729 18:01:16.959481       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.102:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	W0729 18:01:17.035340       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.102:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	E0729 18:01:17.035466       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.102:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	W0729 18:01:17.117643       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.102:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	E0729 18:01:17.117747       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.102:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	W0729 18:01:17.297826       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.102:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	E0729 18:01:17.297943       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.102:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	W0729 18:01:17.464535       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.102:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	E0729 18:01:17.464602       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.102:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	W0729 18:01:17.752042       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.102:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	E0729 18:01:17.752158       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.102:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	W0729 18:01:17.864157       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.102:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	E0729 18:01:17.864302       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.102:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.102:8443: connect: connection refused
	W0729 18:01:24.369616       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 18:01:24.370288       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 18:01:24.370511       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 18:01:24.370917       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 18:01:24.372571       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 18:01:24.372663       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0729 18:01:25.378802       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [fca3429715988ae09489d3d0f512d28babc61b2e2b8a3324612fba1c47839f25] <==
	W0729 17:59:01.143231       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 17:59:01.143337       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0729 17:59:01.340223       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 17:59:01.340429       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 17:59:01.895552       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 17:59:01.895606       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 17:59:02.017551       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 17:59:02.017621       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 17:59:02.218701       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 17:59:02.218757       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 17:59:02.357555       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 17:59:02.357602       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 17:59:02.373847       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 17:59:02.373975       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 17:59:02.708542       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 17:59:02.708684       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0729 17:59:02.774851       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 17:59:02.774904       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 17:59:03.101865       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 17:59:03.101979       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 17:59:03.203061       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 17:59:03.203124       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 17:59:03.545103       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 17:59:03.545193       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 17:59:08.968012       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 29 18:01:47 ha-794405 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 18:01:47 ha-794405 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 18:01:47 ha-794405 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 18:02:18 ha-794405 kubelet[1375]: I0729 18:02:18.489630    1375 kubelet.go:1917] "Trying to delete pod" pod="kube-system/kube-vip-ha-794405" podUID="0e782ab8-0d52-4894-b003-493294ab4710"
	Jul 29 18:02:18 ha-794405 kubelet[1375]: I0729 18:02:18.511218    1375 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-794405"
	Jul 29 18:02:47 ha-794405 kubelet[1375]: E0729 18:02:47.515525    1375 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 18:02:47 ha-794405 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 18:02:47 ha-794405 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 18:02:47 ha-794405 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 18:02:47 ha-794405 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 18:03:47 ha-794405 kubelet[1375]: E0729 18:03:47.515809    1375 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 18:03:47 ha-794405 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 18:03:47 ha-794405 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 18:03:47 ha-794405 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 18:03:47 ha-794405 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 18:04:47 ha-794405 kubelet[1375]: E0729 18:04:47.514849    1375 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 18:04:47 ha-794405 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 18:04:47 ha-794405 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 18:04:47 ha-794405 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 18:04:47 ha-794405 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 18:05:47 ha-794405 kubelet[1375]: E0729 18:05:47.515682    1375 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 18:05:47 ha-794405 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 18:05:47 ha-794405 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 18:05:47 ha-794405 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 18:05:47 ha-794405 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 18:05:58.791921  114275 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19339-88081/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
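The "token too long" failure above comes from bufio.Scanner, whose default per-line limit is 64 KiB; lastStart.txt evidently contains longer lines. A minimal Go sketch (not minikube's code; the file path is hypothetical) of reading such a file with an enlarged scanner buffer:

    // readlong.go: print a log file line by line, tolerating very long lines.
    package main

    import (
        "bufio"
        "fmt"
        "os"
    )

    func main() {
        f, err := os.Open("/tmp/lastStart.txt") // hypothetical path
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()

        sc := bufio.NewScanner(f)
        // Raise the per-line limit from the 64 KiB default to 1 MiB so long
        // entries no longer fail with "bufio.Scanner: token too long".
        sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)
        for sc.Scan() {
            fmt.Println(sc.Text())
        }
        if err := sc.Err(); err != nil {
            fmt.Fprintln(os.Stderr, "scan error:", err)
        }
    }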
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-794405 -n ha-794405
helpers_test.go:261: (dbg) Run:  kubectl --context ha-794405 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.81s)
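The repeated kubelet "Could not set up iptables canary" errors in the logs above report that the ip6tables nat table cannot be initialized in the guest. A minimal Go sketch (not minikube's code) that checks whether ip6table_nat shows up in /proc/modules; the functionality may instead be built into the kernel, so a miss here is only a hint:

    // checkip6nat.go: report whether the ip6table_nat module is loaded on this host.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        data, err := os.ReadFile("/proc/modules")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        for _, line := range strings.Split(string(data), "\n") {
            if strings.HasPrefix(line, "ip6table_nat ") {
                fmt.Println("ip6table_nat is loaded; ip6tables -t nat should be usable")
                return
            }
        }
        fmt.Println("ip6table_nat not listed in /proc/modules; load it (modprobe ip6table_nat) or build it into the kernel")
    }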

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (335.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-976328
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-976328
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-976328: exit status 82 (2m1.680452942s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-976328-m03"  ...
	* Stopping node "multinode-976328-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-976328" : exit status 82
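The stop failed with GUEST_STOP_TIMEOUT because the VM was still "Running" after roughly two minutes of waiting. A minimal Go sketch (not part of the test suite) that inspects the libvirt domain with virsh and, only as a last resort, forces it off; the domain name is assumed to match the stuck node printed above, and virsh must be available on the host:

    // forcestop.go: check a libvirt domain's state and force it off if still running.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        domain := "multinode-976328-m03" // assumed to match the node that failed to stop

        out, err := exec.Command("sudo", "virsh", "domstate", domain).CombinedOutput()
        if err != nil {
            fmt.Fprintf(os.Stderr, "virsh domstate failed: %v\n%s", err, out)
            os.Exit(1)
        }
        state := strings.TrimSpace(string(out))
        fmt.Printf("domain %s state: %s\n", domain, state)

        if state == "running" {
            // Hard power-off, equivalent to pulling the plug on the VM.
            if out, err := exec.Command("sudo", "virsh", "destroy", domain).CombinedOutput(); err != nil {
                fmt.Fprintf(os.Stderr, "virsh destroy failed: %v\n%s", err, out)
                os.Exit(1)
            }
            fmt.Println("domain forced off")
        }
    }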
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-976328 --wait=true -v=8 --alsologtostderr
E0729 18:23:18.906792   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/functional-810151/client.crt: no such file or directory
E0729 18:25:53.334694   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/client.crt: no such file or directory
E0729 18:26:21.950042   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/functional-810151/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-976328 --wait=true -v=8 --alsologtostderr: (3m31.249256577s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-976328
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-976328 -n multinode-976328
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976328 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-976328 logs -n 25: (1.438474351s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-976328 ssh -n                                                                 | multinode-976328 | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC | 29 Jul 24 18:20 UTC |
	|         | multinode-976328-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-976328 cp multinode-976328-m02:/home/docker/cp-test.txt                       | multinode-976328 | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC | 29 Jul 24 18:20 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile737376291/001/cp-test_multinode-976328-m02.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-976328 ssh -n                                                                 | multinode-976328 | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC | 29 Jul 24 18:20 UTC |
	|         | multinode-976328-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-976328 cp multinode-976328-m02:/home/docker/cp-test.txt                       | multinode-976328 | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC | 29 Jul 24 18:20 UTC |
	|         | multinode-976328:/home/docker/cp-test_multinode-976328-m02_multinode-976328.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-976328 ssh -n                                                                 | multinode-976328 | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC | 29 Jul 24 18:20 UTC |
	|         | multinode-976328-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-976328 ssh -n multinode-976328 sudo cat                                       | multinode-976328 | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC | 29 Jul 24 18:20 UTC |
	|         | /home/docker/cp-test_multinode-976328-m02_multinode-976328.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-976328 cp multinode-976328-m02:/home/docker/cp-test.txt                       | multinode-976328 | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC | 29 Jul 24 18:20 UTC |
	|         | multinode-976328-m03:/home/docker/cp-test_multinode-976328-m02_multinode-976328-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-976328 ssh -n                                                                 | multinode-976328 | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC | 29 Jul 24 18:20 UTC |
	|         | multinode-976328-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-976328 ssh -n multinode-976328-m03 sudo cat                                   | multinode-976328 | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC | 29 Jul 24 18:20 UTC |
	|         | /home/docker/cp-test_multinode-976328-m02_multinode-976328-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-976328 cp testdata/cp-test.txt                                                | multinode-976328 | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC | 29 Jul 24 18:20 UTC |
	|         | multinode-976328-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-976328 ssh -n                                                                 | multinode-976328 | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC | 29 Jul 24 18:20 UTC |
	|         | multinode-976328-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-976328 cp multinode-976328-m03:/home/docker/cp-test.txt                       | multinode-976328 | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC | 29 Jul 24 18:20 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile737376291/001/cp-test_multinode-976328-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-976328 ssh -n                                                                 | multinode-976328 | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC | 29 Jul 24 18:20 UTC |
	|         | multinode-976328-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-976328 cp multinode-976328-m03:/home/docker/cp-test.txt                       | multinode-976328 | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC | 29 Jul 24 18:20 UTC |
	|         | multinode-976328:/home/docker/cp-test_multinode-976328-m03_multinode-976328.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-976328 ssh -n                                                                 | multinode-976328 | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC | 29 Jul 24 18:20 UTC |
	|         | multinode-976328-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-976328 ssh -n multinode-976328 sudo cat                                       | multinode-976328 | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC | 29 Jul 24 18:20 UTC |
	|         | /home/docker/cp-test_multinode-976328-m03_multinode-976328.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-976328 cp multinode-976328-m03:/home/docker/cp-test.txt                       | multinode-976328 | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC | 29 Jul 24 18:20 UTC |
	|         | multinode-976328-m02:/home/docker/cp-test_multinode-976328-m03_multinode-976328-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-976328 ssh -n                                                                 | multinode-976328 | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC | 29 Jul 24 18:20 UTC |
	|         | multinode-976328-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-976328 ssh -n multinode-976328-m02 sudo cat                                   | multinode-976328 | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC | 29 Jul 24 18:20 UTC |
	|         | /home/docker/cp-test_multinode-976328-m03_multinode-976328-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-976328 node stop m03                                                          | multinode-976328 | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC | 29 Jul 24 18:20 UTC |
	| node    | multinode-976328 node start                                                             | multinode-976328 | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC | 29 Jul 24 18:20 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-976328                                                                | multinode-976328 | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC |                     |
	| stop    | -p multinode-976328                                                                     | multinode-976328 | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC |                     |
	| start   | -p multinode-976328                                                                     | multinode-976328 | jenkins | v1.33.1 | 29 Jul 24 18:22 UTC | 29 Jul 24 18:26 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-976328                                                                | multinode-976328 | jenkins | v1.33.1 | 29 Jul 24 18:26 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 18:22:57
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 18:22:57.481581  123843 out.go:291] Setting OutFile to fd 1 ...
	I0729 18:22:57.481722  123843 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:22:57.481733  123843 out.go:304] Setting ErrFile to fd 2...
	I0729 18:22:57.481739  123843 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:22:57.481912  123843 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19339-88081/.minikube/bin
	I0729 18:22:57.482532  123843 out.go:298] Setting JSON to false
	I0729 18:22:57.483457  123843 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":11097,"bootTime":1722266280,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 18:22:57.483525  123843 start.go:139] virtualization: kvm guest
	I0729 18:22:57.486338  123843 out.go:177] * [multinode-976328] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 18:22:57.487703  123843 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 18:22:57.487712  123843 notify.go:220] Checking for updates...
	I0729 18:22:57.490419  123843 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 18:22:57.491646  123843 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19339-88081/kubeconfig
	I0729 18:22:57.493115  123843 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19339-88081/.minikube
	I0729 18:22:57.494345  123843 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 18:22:57.495595  123843 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 18:22:57.497355  123843 config.go:182] Loaded profile config "multinode-976328": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:22:57.497451  123843 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 18:22:57.497862  123843 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:22:57.497922  123843 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:22:57.512757  123843 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42721
	I0729 18:22:57.513158  123843 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:22:57.513703  123843 main.go:141] libmachine: Using API Version  1
	I0729 18:22:57.513725  123843 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:22:57.514055  123843 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:22:57.514212  123843 main.go:141] libmachine: (multinode-976328) Calling .DriverName
	I0729 18:22:57.547831  123843 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 18:22:57.549092  123843 start.go:297] selected driver: kvm2
	I0729 18:22:57.549116  123843 start.go:901] validating driver "kvm2" against &{Name:multinode-976328 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-976328 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.157 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.144 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:22:57.549228  123843 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 18:22:57.549567  123843 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 18:22:57.549671  123843 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19339-88081/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 18:22:57.563944  123843 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 18:22:57.564639  123843 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 18:22:57.564694  123843 cni.go:84] Creating CNI manager for ""
	I0729 18:22:57.564705  123843 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0729 18:22:57.564762  123843 start.go:340] cluster config:
	{Name:multinode-976328 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-976328 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.157 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.144 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:22:57.564934  123843 iso.go:125] acquiring lock: {Name:mkff602c753bfa3e0d79d57ee8dc490f2e8d0298 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 18:22:57.566539  123843 out.go:177] * Starting "multinode-976328" primary control-plane node in "multinode-976328" cluster
	I0729 18:22:57.567662  123843 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 18:22:57.567713  123843 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 18:22:57.567733  123843 cache.go:56] Caching tarball of preloaded images
	I0729 18:22:57.567841  123843 preload.go:172] Found /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 18:22:57.567853  123843 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 18:22:57.568003  123843 profile.go:143] Saving config to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/multinode-976328/config.json ...
	I0729 18:22:57.568231  123843 start.go:360] acquireMachinesLock for multinode-976328: {Name:mkb4fc91615189a18c2505c715f6575ee0da2912 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 18:22:57.568277  123843 start.go:364] duration metric: took 26.231µs to acquireMachinesLock for "multinode-976328"
	I0729 18:22:57.568298  123843 start.go:96] Skipping create...Using existing machine configuration
	I0729 18:22:57.568308  123843 fix.go:54] fixHost starting: 
	I0729 18:22:57.568615  123843 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:22:57.568652  123843 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:22:57.582692  123843 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37987
	I0729 18:22:57.583075  123843 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:22:57.583527  123843 main.go:141] libmachine: Using API Version  1
	I0729 18:22:57.583551  123843 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:22:57.583891  123843 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:22:57.584088  123843 main.go:141] libmachine: (multinode-976328) Calling .DriverName
	I0729 18:22:57.584283  123843 main.go:141] libmachine: (multinode-976328) Calling .GetState
	I0729 18:22:57.585875  123843 fix.go:112] recreateIfNeeded on multinode-976328: state=Running err=<nil>
	W0729 18:22:57.585891  123843 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 18:22:57.588411  123843 out.go:177] * Updating the running kvm2 "multinode-976328" VM ...
	I0729 18:22:57.589883  123843 machine.go:94] provisionDockerMachine start ...
	I0729 18:22:57.589907  123843 main.go:141] libmachine: (multinode-976328) Calling .DriverName
	I0729 18:22:57.590114  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHHostname
	I0729 18:22:57.592474  123843 main.go:141] libmachine: (multinode-976328) DBG | domain multinode-976328 has defined MAC address 52:54:00:56:e1:42 in network mk-multinode-976328
	I0729 18:22:57.592923  123843 main.go:141] libmachine: (multinode-976328) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:e1:42", ip: ""} in network mk-multinode-976328: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:28 +0000 UTC Type:0 Mac:52:54:00:56:e1:42 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:multinode-976328 Clientid:01:52:54:00:56:e1:42}
	I0729 18:22:57.592947  123843 main.go:141] libmachine: (multinode-976328) DBG | domain multinode-976328 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:e1:42 in network mk-multinode-976328
	I0729 18:22:57.593091  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHPort
	I0729 18:22:57.593270  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHKeyPath
	I0729 18:22:57.593426  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHKeyPath
	I0729 18:22:57.593560  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHUsername
	I0729 18:22:57.593730  123843 main.go:141] libmachine: Using SSH client type: native
	I0729 18:22:57.593927  123843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0729 18:22:57.593939  123843 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 18:22:57.698274  123843 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-976328
	
	I0729 18:22:57.698332  123843 main.go:141] libmachine: (multinode-976328) Calling .GetMachineName
	I0729 18:22:57.698624  123843 buildroot.go:166] provisioning hostname "multinode-976328"
	I0729 18:22:57.698650  123843 main.go:141] libmachine: (multinode-976328) Calling .GetMachineName
	I0729 18:22:57.698875  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHHostname
	I0729 18:22:57.701474  123843 main.go:141] libmachine: (multinode-976328) DBG | domain multinode-976328 has defined MAC address 52:54:00:56:e1:42 in network mk-multinode-976328
	I0729 18:22:57.701810  123843 main.go:141] libmachine: (multinode-976328) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:e1:42", ip: ""} in network mk-multinode-976328: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:28 +0000 UTC Type:0 Mac:52:54:00:56:e1:42 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:multinode-976328 Clientid:01:52:54:00:56:e1:42}
	I0729 18:22:57.701841  123843 main.go:141] libmachine: (multinode-976328) DBG | domain multinode-976328 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:e1:42 in network mk-multinode-976328
	I0729 18:22:57.702042  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHPort
	I0729 18:22:57.702228  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHKeyPath
	I0729 18:22:57.702400  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHKeyPath
	I0729 18:22:57.702561  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHUsername
	I0729 18:22:57.702707  123843 main.go:141] libmachine: Using SSH client type: native
	I0729 18:22:57.702897  123843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0729 18:22:57.702915  123843 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-976328 && echo "multinode-976328" | sudo tee /etc/hostname
	I0729 18:22:57.826863  123843 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-976328
	
	I0729 18:22:57.826896  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHHostname
	I0729 18:22:57.829596  123843 main.go:141] libmachine: (multinode-976328) DBG | domain multinode-976328 has defined MAC address 52:54:00:56:e1:42 in network mk-multinode-976328
	I0729 18:22:57.830002  123843 main.go:141] libmachine: (multinode-976328) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:e1:42", ip: ""} in network mk-multinode-976328: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:28 +0000 UTC Type:0 Mac:52:54:00:56:e1:42 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:multinode-976328 Clientid:01:52:54:00:56:e1:42}
	I0729 18:22:57.830038  123843 main.go:141] libmachine: (multinode-976328) DBG | domain multinode-976328 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:e1:42 in network mk-multinode-976328
	I0729 18:22:57.830231  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHPort
	I0729 18:22:57.830419  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHKeyPath
	I0729 18:22:57.830598  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHKeyPath
	I0729 18:22:57.830722  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHUsername
	I0729 18:22:57.830919  123843 main.go:141] libmachine: Using SSH client type: native
	I0729 18:22:57.831135  123843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0729 18:22:57.831159  123843 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-976328' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-976328/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-976328' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 18:22:57.937684  123843 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 18:22:57.937724  123843 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19339-88081/.minikube CaCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19339-88081/.minikube}
	I0729 18:22:57.937748  123843 buildroot.go:174] setting up certificates
	I0729 18:22:57.937756  123843 provision.go:84] configureAuth start
	I0729 18:22:57.937765  123843 main.go:141] libmachine: (multinode-976328) Calling .GetMachineName
	I0729 18:22:57.938027  123843 main.go:141] libmachine: (multinode-976328) Calling .GetIP
	I0729 18:22:57.940568  123843 main.go:141] libmachine: (multinode-976328) DBG | domain multinode-976328 has defined MAC address 52:54:00:56:e1:42 in network mk-multinode-976328
	I0729 18:22:57.940977  123843 main.go:141] libmachine: (multinode-976328) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:e1:42", ip: ""} in network mk-multinode-976328: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:28 +0000 UTC Type:0 Mac:52:54:00:56:e1:42 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:multinode-976328 Clientid:01:52:54:00:56:e1:42}
	I0729 18:22:57.941012  123843 main.go:141] libmachine: (multinode-976328) DBG | domain multinode-976328 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:e1:42 in network mk-multinode-976328
	I0729 18:22:57.941191  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHHostname
	I0729 18:22:57.943603  123843 main.go:141] libmachine: (multinode-976328) DBG | domain multinode-976328 has defined MAC address 52:54:00:56:e1:42 in network mk-multinode-976328
	I0729 18:22:57.943903  123843 main.go:141] libmachine: (multinode-976328) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:e1:42", ip: ""} in network mk-multinode-976328: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:28 +0000 UTC Type:0 Mac:52:54:00:56:e1:42 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:multinode-976328 Clientid:01:52:54:00:56:e1:42}
	I0729 18:22:57.943953  123843 main.go:141] libmachine: (multinode-976328) DBG | domain multinode-976328 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:e1:42 in network mk-multinode-976328
	I0729 18:22:57.944053  123843 provision.go:143] copyHostCerts
	I0729 18:22:57.944085  123843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem
	I0729 18:22:57.944114  123843 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem, removing ...
	I0729 18:22:57.944122  123843 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem
	I0729 18:22:57.944188  123843 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem (1078 bytes)
	I0729 18:22:57.944260  123843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem
	I0729 18:22:57.944279  123843 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem, removing ...
	I0729 18:22:57.944283  123843 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem
	I0729 18:22:57.944308  123843 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem (1123 bytes)
	I0729 18:22:57.944392  123843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem
	I0729 18:22:57.944412  123843 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem, removing ...
	I0729 18:22:57.944416  123843 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem
	I0729 18:22:57.944440  123843 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem (1679 bytes)
	I0729 18:22:57.944483  123843 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem org=jenkins.multinode-976328 san=[127.0.0.1 192.168.39.211 localhost minikube multinode-976328]
	I0729 18:22:58.035014  123843 provision.go:177] copyRemoteCerts
	I0729 18:22:58.035092  123843 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 18:22:58.035118  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHHostname
	I0729 18:22:58.037768  123843 main.go:141] libmachine: (multinode-976328) DBG | domain multinode-976328 has defined MAC address 52:54:00:56:e1:42 in network mk-multinode-976328
	I0729 18:22:58.038184  123843 main.go:141] libmachine: (multinode-976328) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:e1:42", ip: ""} in network mk-multinode-976328: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:28 +0000 UTC Type:0 Mac:52:54:00:56:e1:42 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:multinode-976328 Clientid:01:52:54:00:56:e1:42}
	I0729 18:22:58.038216  123843 main.go:141] libmachine: (multinode-976328) DBG | domain multinode-976328 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:e1:42 in network mk-multinode-976328
	I0729 18:22:58.038369  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHPort
	I0729 18:22:58.038567  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHKeyPath
	I0729 18:22:58.038708  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHUsername
	I0729 18:22:58.038868  123843 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/multinode-976328/id_rsa Username:docker}
	I0729 18:22:58.119217  123843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 18:22:58.119291  123843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 18:22:58.144225  123843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 18:22:58.144282  123843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0729 18:22:58.168526  123843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 18:22:58.168588  123843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 18:22:58.192738  123843 provision.go:87] duration metric: took 254.967854ms to configureAuth
	I0729 18:22:58.192764  123843 buildroot.go:189] setting minikube options for container-runtime
	I0729 18:22:58.193055  123843 config.go:182] Loaded profile config "multinode-976328": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:22:58.193153  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHHostname
	I0729 18:22:58.195804  123843 main.go:141] libmachine: (multinode-976328) DBG | domain multinode-976328 has defined MAC address 52:54:00:56:e1:42 in network mk-multinode-976328
	I0729 18:22:58.196236  123843 main.go:141] libmachine: (multinode-976328) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:e1:42", ip: ""} in network mk-multinode-976328: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:28 +0000 UTC Type:0 Mac:52:54:00:56:e1:42 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:multinode-976328 Clientid:01:52:54:00:56:e1:42}
	I0729 18:22:58.196261  123843 main.go:141] libmachine: (multinode-976328) DBG | domain multinode-976328 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:e1:42 in network mk-multinode-976328
	I0729 18:22:58.196460  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHPort
	I0729 18:22:58.196661  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHKeyPath
	I0729 18:22:58.196814  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHKeyPath
	I0729 18:22:58.196962  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHUsername
	I0729 18:22:58.197093  123843 main.go:141] libmachine: Using SSH client type: native
	I0729 18:22:58.197260  123843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0729 18:22:58.197274  123843 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 18:24:29.024042  123843 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 18:24:29.024105  123843 machine.go:97] duration metric: took 1m31.434198984s to provisionDockerMachine
	I0729 18:24:29.024124  123843 start.go:293] postStartSetup for "multinode-976328" (driver="kvm2")
	I0729 18:24:29.024139  123843 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 18:24:29.024165  123843 main.go:141] libmachine: (multinode-976328) Calling .DriverName
	I0729 18:24:29.024541  123843 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 18:24:29.024583  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHHostname
	I0729 18:24:29.027993  123843 main.go:141] libmachine: (multinode-976328) DBG | domain multinode-976328 has defined MAC address 52:54:00:56:e1:42 in network mk-multinode-976328
	I0729 18:24:29.028432  123843 main.go:141] libmachine: (multinode-976328) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:e1:42", ip: ""} in network mk-multinode-976328: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:28 +0000 UTC Type:0 Mac:52:54:00:56:e1:42 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:multinode-976328 Clientid:01:52:54:00:56:e1:42}
	I0729 18:24:29.028454  123843 main.go:141] libmachine: (multinode-976328) DBG | domain multinode-976328 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:e1:42 in network mk-multinode-976328
	I0729 18:24:29.028615  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHPort
	I0729 18:24:29.028793  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHKeyPath
	I0729 18:24:29.029035  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHUsername
	I0729 18:24:29.029217  123843 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/multinode-976328/id_rsa Username:docker}
	I0729 18:24:29.112235  123843 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 18:24:29.116688  123843 command_runner.go:130] > NAME=Buildroot
	I0729 18:24:29.116713  123843 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0729 18:24:29.116719  123843 command_runner.go:130] > ID=buildroot
	I0729 18:24:29.116726  123843 command_runner.go:130] > VERSION_ID=2023.02.9
	I0729 18:24:29.116732  123843 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0729 18:24:29.116767  123843 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 18:24:29.116785  123843 filesync.go:126] Scanning /home/jenkins/minikube-integration/19339-88081/.minikube/addons for local assets ...
	I0729 18:24:29.116886  123843 filesync.go:126] Scanning /home/jenkins/minikube-integration/19339-88081/.minikube/files for local assets ...
	I0729 18:24:29.116985  123843 filesync.go:149] local asset: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem -> 952822.pem in /etc/ssl/certs
	I0729 18:24:29.117001  123843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem -> /etc/ssl/certs/952822.pem
	I0729 18:24:29.117115  123843 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 18:24:29.126517  123843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem --> /etc/ssl/certs/952822.pem (1708 bytes)
	I0729 18:24:29.149882  123843 start.go:296] duration metric: took 125.741298ms for postStartSetup
	I0729 18:24:29.149940  123843 fix.go:56] duration metric: took 1m31.58163236s for fixHost
	I0729 18:24:29.149972  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHHostname
	I0729 18:24:29.152689  123843 main.go:141] libmachine: (multinode-976328) DBG | domain multinode-976328 has defined MAC address 52:54:00:56:e1:42 in network mk-multinode-976328
	I0729 18:24:29.153004  123843 main.go:141] libmachine: (multinode-976328) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:e1:42", ip: ""} in network mk-multinode-976328: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:28 +0000 UTC Type:0 Mac:52:54:00:56:e1:42 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:multinode-976328 Clientid:01:52:54:00:56:e1:42}
	I0729 18:24:29.153033  123843 main.go:141] libmachine: (multinode-976328) DBG | domain multinode-976328 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:e1:42 in network mk-multinode-976328
	I0729 18:24:29.153161  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHPort
	I0729 18:24:29.153357  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHKeyPath
	I0729 18:24:29.153541  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHKeyPath
	I0729 18:24:29.153685  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHUsername
	I0729 18:24:29.153893  123843 main.go:141] libmachine: Using SSH client type: native
	I0729 18:24:29.154077  123843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0729 18:24:29.154092  123843 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 18:24:29.257458  123843 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722277469.232257137
	
	I0729 18:24:29.257489  123843 fix.go:216] guest clock: 1722277469.232257137
	I0729 18:24:29.257500  123843 fix.go:229] Guest: 2024-07-29 18:24:29.232257137 +0000 UTC Remote: 2024-07-29 18:24:29.149949853 +0000 UTC m=+91.704777228 (delta=82.307284ms)
	I0729 18:24:29.257562  123843 fix.go:200] guest clock delta is within tolerance: 82.307284ms
	I0729 18:24:29.257574  123843 start.go:83] releasing machines lock for "multinode-976328", held for 1m31.689283817s
	I0729 18:24:29.257627  123843 main.go:141] libmachine: (multinode-976328) Calling .DriverName
	I0729 18:24:29.257908  123843 main.go:141] libmachine: (multinode-976328) Calling .GetIP
	I0729 18:24:29.260505  123843 main.go:141] libmachine: (multinode-976328) DBG | domain multinode-976328 has defined MAC address 52:54:00:56:e1:42 in network mk-multinode-976328
	I0729 18:24:29.260886  123843 main.go:141] libmachine: (multinode-976328) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:e1:42", ip: ""} in network mk-multinode-976328: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:28 +0000 UTC Type:0 Mac:52:54:00:56:e1:42 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:multinode-976328 Clientid:01:52:54:00:56:e1:42}
	I0729 18:24:29.260915  123843 main.go:141] libmachine: (multinode-976328) DBG | domain multinode-976328 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:e1:42 in network mk-multinode-976328
	I0729 18:24:29.261069  123843 main.go:141] libmachine: (multinode-976328) Calling .DriverName
	I0729 18:24:29.261556  123843 main.go:141] libmachine: (multinode-976328) Calling .DriverName
	I0729 18:24:29.261765  123843 main.go:141] libmachine: (multinode-976328) Calling .DriverName
	I0729 18:24:29.261882  123843 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 18:24:29.261931  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHHostname
	I0729 18:24:29.261992  123843 ssh_runner.go:195] Run: cat /version.json
	I0729 18:24:29.262013  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHHostname
	I0729 18:24:29.264582  123843 main.go:141] libmachine: (multinode-976328) DBG | domain multinode-976328 has defined MAC address 52:54:00:56:e1:42 in network mk-multinode-976328
	I0729 18:24:29.264942  123843 main.go:141] libmachine: (multinode-976328) DBG | domain multinode-976328 has defined MAC address 52:54:00:56:e1:42 in network mk-multinode-976328
	I0729 18:24:29.265000  123843 main.go:141] libmachine: (multinode-976328) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:e1:42", ip: ""} in network mk-multinode-976328: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:28 +0000 UTC Type:0 Mac:52:54:00:56:e1:42 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:multinode-976328 Clientid:01:52:54:00:56:e1:42}
	I0729 18:24:29.265026  123843 main.go:141] libmachine: (multinode-976328) DBG | domain multinode-976328 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:e1:42 in network mk-multinode-976328
	I0729 18:24:29.265156  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHPort
	I0729 18:24:29.265371  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHKeyPath
	I0729 18:24:29.265434  123843 main.go:141] libmachine: (multinode-976328) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:e1:42", ip: ""} in network mk-multinode-976328: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:28 +0000 UTC Type:0 Mac:52:54:00:56:e1:42 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:multinode-976328 Clientid:01:52:54:00:56:e1:42}
	I0729 18:24:29.265461  123843 main.go:141] libmachine: (multinode-976328) DBG | domain multinode-976328 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:e1:42 in network mk-multinode-976328
	I0729 18:24:29.265593  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHPort
	I0729 18:24:29.265627  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHUsername
	I0729 18:24:29.265756  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHKeyPath
	I0729 18:24:29.265762  123843 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/multinode-976328/id_rsa Username:docker}
	I0729 18:24:29.265912  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHUsername
	I0729 18:24:29.266065  123843 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/multinode-976328/id_rsa Username:docker}
	I0729 18:24:29.360987  123843 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0729 18:24:29.361587  123843 command_runner.go:130] > {"iso_version": "v1.33.1-1722248113-19339", "kicbase_version": "v0.0.44-1721902582-19326", "minikube_version": "v1.33.1", "commit": "b8389556a97747a5bbaa1906d238251ad536d76e"}
	I0729 18:24:29.361764  123843 ssh_runner.go:195] Run: systemctl --version
	I0729 18:24:29.367349  123843 command_runner.go:130] > systemd 252 (252)
	I0729 18:24:29.367384  123843 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0729 18:24:29.367441  123843 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 18:24:29.536713  123843 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0729 18:24:29.551746  123843 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0729 18:24:29.551820  123843 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 18:24:29.551899  123843 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 18:24:29.562335  123843 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0729 18:24:29.562361  123843 start.go:495] detecting cgroup driver to use...
	I0729 18:24:29.562419  123843 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 18:24:29.581046  123843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 18:24:29.600639  123843 docker.go:217] disabling cri-docker service (if available) ...
	I0729 18:24:29.600715  123843 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 18:24:29.619374  123843 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 18:24:29.641249  123843 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 18:24:29.790698  123843 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 18:24:29.932104  123843 docker.go:233] disabling docker service ...
	I0729 18:24:29.932187  123843 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 18:24:29.949496  123843 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 18:24:29.962678  123843 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 18:24:30.103174  123843 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 18:24:30.246717  123843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 18:24:30.261295  123843 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 18:24:30.279497  123843 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0729 18:24:30.279547  123843 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 18:24:30.279592  123843 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:24:30.290159  123843 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 18:24:30.290230  123843 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:24:30.300570  123843 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:24:30.311042  123843 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:24:30.321336  123843 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 18:24:30.332747  123843 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:24:30.343166  123843 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:24:30.354270  123843 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
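	The sed commands above rewrite CRI-O's drop-in at /etc/crio/crio.conf.d/02-crio.conf: they pin the pause image to registry.k8s.io/pause:3.9, set cgroup_manager to "cgroupfs", re-add conmon_cgroup = "pod", and seed default_sysctls with net.ipv4.ip_unprivileged_port_start=0. A minimal sketch of spot-checking the result on the guest (the grep pattern and expected lines are reconstructed from those commands, not captured from this run):

		# Sketch: confirm the drop-in carries the values written by the sed edits above
		sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
		    /etc/crio/crio.conf.d/02-crio.conf
		# Expected, assuming the edits applied cleanly:
		#   pause_image = "registry.k8s.io/pause:3.9"
		#   cgroup_manager = "cgroupfs"
		#   conmon_cgroup = "pod"
		#     "net.ipv4.ip_unprivileged_port_start=0",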
	I0729 18:24:30.364877  123843 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 18:24:30.374082  123843 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0729 18:24:30.374137  123843 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 18:24:30.383443  123843 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:24:30.522540  123843 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 18:24:34.871239  123843 ssh_runner.go:235] Completed: sudo systemctl restart crio: (4.348655457s)
	I0729 18:24:34.871274  123843 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 18:24:34.871331  123843 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 18:24:34.876043  123843 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0729 18:24:34.876070  123843 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0729 18:24:34.876081  123843 command_runner.go:130] > Device: 0,22	Inode: 1355        Links: 1
	I0729 18:24:34.876091  123843 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0729 18:24:34.876102  123843 command_runner.go:130] > Access: 2024-07-29 18:24:34.734267717 +0000
	I0729 18:24:34.876109  123843 command_runner.go:130] > Modify: 2024-07-29 18:24:34.734267717 +0000
	I0729 18:24:34.876116  123843 command_runner.go:130] > Change: 2024-07-29 18:24:34.734267717 +0000
	I0729 18:24:34.876120  123843 command_runner.go:130] >  Birth: -
	I0729 18:24:34.876139  123843 start.go:563] Will wait 60s for crictl version
	I0729 18:24:34.876179  123843 ssh_runner.go:195] Run: which crictl
	I0729 18:24:34.879756  123843 command_runner.go:130] > /usr/bin/crictl
	I0729 18:24:34.879826  123843 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 18:24:34.916240  123843 command_runner.go:130] > Version:  0.1.0
	I0729 18:24:34.916265  123843 command_runner.go:130] > RuntimeName:  cri-o
	I0729 18:24:34.916273  123843 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0729 18:24:34.916281  123843 command_runner.go:130] > RuntimeApiVersion:  v1
	I0729 18:24:34.916302  123843 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 18:24:34.916382  123843 ssh_runner.go:195] Run: crio --version
	I0729 18:24:34.947447  123843 command_runner.go:130] > crio version 1.29.1
	I0729 18:24:34.947476  123843 command_runner.go:130] > Version:        1.29.1
	I0729 18:24:34.947484  123843 command_runner.go:130] > GitCommit:      unknown
	I0729 18:24:34.947490  123843 command_runner.go:130] > GitCommitDate:  unknown
	I0729 18:24:34.947497  123843 command_runner.go:130] > GitTreeState:   clean
	I0729 18:24:34.947506  123843 command_runner.go:130] > BuildDate:      2024-07-29T16:04:01Z
	I0729 18:24:34.947513  123843 command_runner.go:130] > GoVersion:      go1.21.6
	I0729 18:24:34.947518  123843 command_runner.go:130] > Compiler:       gc
	I0729 18:24:34.947525  123843 command_runner.go:130] > Platform:       linux/amd64
	I0729 18:24:34.947535  123843 command_runner.go:130] > Linkmode:       dynamic
	I0729 18:24:34.947540  123843 command_runner.go:130] > BuildTags:      
	I0729 18:24:34.947545  123843 command_runner.go:130] >   containers_image_ostree_stub
	I0729 18:24:34.947552  123843 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0729 18:24:34.947556  123843 command_runner.go:130] >   btrfs_noversion
	I0729 18:24:34.947575  123843 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0729 18:24:34.947580  123843 command_runner.go:130] >   libdm_no_deferred_remove
	I0729 18:24:34.947583  123843 command_runner.go:130] >   seccomp
	I0729 18:24:34.947587  123843 command_runner.go:130] > LDFlags:          unknown
	I0729 18:24:34.947590  123843 command_runner.go:130] > SeccompEnabled:   true
	I0729 18:24:34.947594  123843 command_runner.go:130] > AppArmorEnabled:  false
	I0729 18:24:34.947863  123843 ssh_runner.go:195] Run: crio --version
	I0729 18:24:34.974270  123843 command_runner.go:130] > crio version 1.29.1
	I0729 18:24:34.974293  123843 command_runner.go:130] > Version:        1.29.1
	I0729 18:24:34.974331  123843 command_runner.go:130] > GitCommit:      unknown
	I0729 18:24:34.974339  123843 command_runner.go:130] > GitCommitDate:  unknown
	I0729 18:24:34.974345  123843 command_runner.go:130] > GitTreeState:   clean
	I0729 18:24:34.974352  123843 command_runner.go:130] > BuildDate:      2024-07-29T16:04:01Z
	I0729 18:24:34.974357  123843 command_runner.go:130] > GoVersion:      go1.21.6
	I0729 18:24:34.974361  123843 command_runner.go:130] > Compiler:       gc
	I0729 18:24:34.974365  123843 command_runner.go:130] > Platform:       linux/amd64
	I0729 18:24:34.974372  123843 command_runner.go:130] > Linkmode:       dynamic
	I0729 18:24:34.974377  123843 command_runner.go:130] > BuildTags:      
	I0729 18:24:34.974384  123843 command_runner.go:130] >   containers_image_ostree_stub
	I0729 18:24:34.974389  123843 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0729 18:24:34.974400  123843 command_runner.go:130] >   btrfs_noversion
	I0729 18:24:34.974408  123843 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0729 18:24:34.974415  123843 command_runner.go:130] >   libdm_no_deferred_remove
	I0729 18:24:34.974424  123843 command_runner.go:130] >   seccomp
	I0729 18:24:34.974434  123843 command_runner.go:130] > LDFlags:          unknown
	I0729 18:24:34.974442  123843 command_runner.go:130] > SeccompEnabled:   true
	I0729 18:24:34.974449  123843 command_runner.go:130] > AppArmorEnabled:  false
	I0729 18:24:34.977557  123843 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 18:24:34.978912  123843 main.go:141] libmachine: (multinode-976328) Calling .GetIP
	I0729 18:24:34.981831  123843 main.go:141] libmachine: (multinode-976328) DBG | domain multinode-976328 has defined MAC address 52:54:00:56:e1:42 in network mk-multinode-976328
	I0729 18:24:34.982267  123843 main.go:141] libmachine: (multinode-976328) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:e1:42", ip: ""} in network mk-multinode-976328: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:28 +0000 UTC Type:0 Mac:52:54:00:56:e1:42 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:multinode-976328 Clientid:01:52:54:00:56:e1:42}
	I0729 18:24:34.982295  123843 main.go:141] libmachine: (multinode-976328) DBG | domain multinode-976328 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:e1:42 in network mk-multinode-976328
	I0729 18:24:34.982473  123843 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 18:24:34.986696  123843 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0729 18:24:34.986773  123843 kubeadm.go:883] updating cluster {Name:multinode-976328 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-976328 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.157 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.144 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 18:24:34.986892  123843 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 18:24:34.986938  123843 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:24:35.033624  123843 command_runner.go:130] > {
	I0729 18:24:35.033650  123843 command_runner.go:130] >   "images": [
	I0729 18:24:35.033654  123843 command_runner.go:130] >     {
	I0729 18:24:35.033663  123843 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0729 18:24:35.033668  123843 command_runner.go:130] >       "repoTags": [
	I0729 18:24:35.033673  123843 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0729 18:24:35.033677  123843 command_runner.go:130] >       ],
	I0729 18:24:35.033681  123843 command_runner.go:130] >       "repoDigests": [
	I0729 18:24:35.033688  123843 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0729 18:24:35.033696  123843 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0729 18:24:35.033701  123843 command_runner.go:130] >       ],
	I0729 18:24:35.033706  123843 command_runner.go:130] >       "size": "87165492",
	I0729 18:24:35.033712  123843 command_runner.go:130] >       "uid": null,
	I0729 18:24:35.033716  123843 command_runner.go:130] >       "username": "",
	I0729 18:24:35.033725  123843 command_runner.go:130] >       "spec": null,
	I0729 18:24:35.033731  123843 command_runner.go:130] >       "pinned": false
	I0729 18:24:35.033734  123843 command_runner.go:130] >     },
	I0729 18:24:35.033738  123843 command_runner.go:130] >     {
	I0729 18:24:35.033743  123843 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0729 18:24:35.033748  123843 command_runner.go:130] >       "repoTags": [
	I0729 18:24:35.033753  123843 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0729 18:24:35.033771  123843 command_runner.go:130] >       ],
	I0729 18:24:35.033778  123843 command_runner.go:130] >       "repoDigests": [
	I0729 18:24:35.033784  123843 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0729 18:24:35.033792  123843 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0729 18:24:35.033795  123843 command_runner.go:130] >       ],
	I0729 18:24:35.033804  123843 command_runner.go:130] >       "size": "87174707",
	I0729 18:24:35.033810  123843 command_runner.go:130] >       "uid": null,
	I0729 18:24:35.033817  123843 command_runner.go:130] >       "username": "",
	I0729 18:24:35.033821  123843 command_runner.go:130] >       "spec": null,
	I0729 18:24:35.033825  123843 command_runner.go:130] >       "pinned": false
	I0729 18:24:35.033829  123843 command_runner.go:130] >     },
	I0729 18:24:35.033832  123843 command_runner.go:130] >     {
	I0729 18:24:35.033838  123843 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0729 18:24:35.033842  123843 command_runner.go:130] >       "repoTags": [
	I0729 18:24:35.033847  123843 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0729 18:24:35.033850  123843 command_runner.go:130] >       ],
	I0729 18:24:35.033854  123843 command_runner.go:130] >       "repoDigests": [
	I0729 18:24:35.033863  123843 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0729 18:24:35.033870  123843 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0729 18:24:35.033874  123843 command_runner.go:130] >       ],
	I0729 18:24:35.033878  123843 command_runner.go:130] >       "size": "1363676",
	I0729 18:24:35.033881  123843 command_runner.go:130] >       "uid": null,
	I0729 18:24:35.033885  123843 command_runner.go:130] >       "username": "",
	I0729 18:24:35.033889  123843 command_runner.go:130] >       "spec": null,
	I0729 18:24:35.033893  123843 command_runner.go:130] >       "pinned": false
	I0729 18:24:35.033896  123843 command_runner.go:130] >     },
	I0729 18:24:35.033899  123843 command_runner.go:130] >     {
	I0729 18:24:35.033905  123843 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0729 18:24:35.033909  123843 command_runner.go:130] >       "repoTags": [
	I0729 18:24:35.033914  123843 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0729 18:24:35.033917  123843 command_runner.go:130] >       ],
	I0729 18:24:35.033921  123843 command_runner.go:130] >       "repoDigests": [
	I0729 18:24:35.033928  123843 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0729 18:24:35.033945  123843 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0729 18:24:35.033950  123843 command_runner.go:130] >       ],
	I0729 18:24:35.033954  123843 command_runner.go:130] >       "size": "31470524",
	I0729 18:24:35.033962  123843 command_runner.go:130] >       "uid": null,
	I0729 18:24:35.033968  123843 command_runner.go:130] >       "username": "",
	I0729 18:24:35.033972  123843 command_runner.go:130] >       "spec": null,
	I0729 18:24:35.033976  123843 command_runner.go:130] >       "pinned": false
	I0729 18:24:35.033979  123843 command_runner.go:130] >     },
	I0729 18:24:35.033982  123843 command_runner.go:130] >     {
	I0729 18:24:35.033988  123843 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0729 18:24:35.033995  123843 command_runner.go:130] >       "repoTags": [
	I0729 18:24:35.034000  123843 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0729 18:24:35.034006  123843 command_runner.go:130] >       ],
	I0729 18:24:35.034009  123843 command_runner.go:130] >       "repoDigests": [
	I0729 18:24:35.034016  123843 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0729 18:24:35.034025  123843 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0729 18:24:35.034040  123843 command_runner.go:130] >       ],
	I0729 18:24:35.034047  123843 command_runner.go:130] >       "size": "61245718",
	I0729 18:24:35.034050  123843 command_runner.go:130] >       "uid": null,
	I0729 18:24:35.034055  123843 command_runner.go:130] >       "username": "nonroot",
	I0729 18:24:35.034059  123843 command_runner.go:130] >       "spec": null,
	I0729 18:24:35.034063  123843 command_runner.go:130] >       "pinned": false
	I0729 18:24:35.034066  123843 command_runner.go:130] >     },
	I0729 18:24:35.034070  123843 command_runner.go:130] >     {
	I0729 18:24:35.034076  123843 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0729 18:24:35.034082  123843 command_runner.go:130] >       "repoTags": [
	I0729 18:24:35.034086  123843 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0729 18:24:35.034090  123843 command_runner.go:130] >       ],
	I0729 18:24:35.034093  123843 command_runner.go:130] >       "repoDigests": [
	I0729 18:24:35.034100  123843 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0729 18:24:35.034109  123843 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0729 18:24:35.034112  123843 command_runner.go:130] >       ],
	I0729 18:24:35.034117  123843 command_runner.go:130] >       "size": "150779692",
	I0729 18:24:35.034121  123843 command_runner.go:130] >       "uid": {
	I0729 18:24:35.034125  123843 command_runner.go:130] >         "value": "0"
	I0729 18:24:35.034131  123843 command_runner.go:130] >       },
	I0729 18:24:35.034134  123843 command_runner.go:130] >       "username": "",
	I0729 18:24:35.034138  123843 command_runner.go:130] >       "spec": null,
	I0729 18:24:35.034142  123843 command_runner.go:130] >       "pinned": false
	I0729 18:24:35.034149  123843 command_runner.go:130] >     },
	I0729 18:24:35.034155  123843 command_runner.go:130] >     {
	I0729 18:24:35.034161  123843 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0729 18:24:35.034167  123843 command_runner.go:130] >       "repoTags": [
	I0729 18:24:35.034176  123843 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0729 18:24:35.034182  123843 command_runner.go:130] >       ],
	I0729 18:24:35.034186  123843 command_runner.go:130] >       "repoDigests": [
	I0729 18:24:35.034193  123843 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0729 18:24:35.034203  123843 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0729 18:24:35.034207  123843 command_runner.go:130] >       ],
	I0729 18:24:35.034213  123843 command_runner.go:130] >       "size": "117609954",
	I0729 18:24:35.034217  123843 command_runner.go:130] >       "uid": {
	I0729 18:24:35.034223  123843 command_runner.go:130] >         "value": "0"
	I0729 18:24:35.034226  123843 command_runner.go:130] >       },
	I0729 18:24:35.034230  123843 command_runner.go:130] >       "username": "",
	I0729 18:24:35.034234  123843 command_runner.go:130] >       "spec": null,
	I0729 18:24:35.034238  123843 command_runner.go:130] >       "pinned": false
	I0729 18:24:35.034241  123843 command_runner.go:130] >     },
	I0729 18:24:35.034244  123843 command_runner.go:130] >     {
	I0729 18:24:35.034250  123843 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0729 18:24:35.034255  123843 command_runner.go:130] >       "repoTags": [
	I0729 18:24:35.034260  123843 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0729 18:24:35.034265  123843 command_runner.go:130] >       ],
	I0729 18:24:35.034269  123843 command_runner.go:130] >       "repoDigests": [
	I0729 18:24:35.034289  123843 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0729 18:24:35.034298  123843 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0729 18:24:35.034302  123843 command_runner.go:130] >       ],
	I0729 18:24:35.034306  123843 command_runner.go:130] >       "size": "112198984",
	I0729 18:24:35.034311  123843 command_runner.go:130] >       "uid": {
	I0729 18:24:35.034315  123843 command_runner.go:130] >         "value": "0"
	I0729 18:24:35.034318  123843 command_runner.go:130] >       },
	I0729 18:24:35.034322  123843 command_runner.go:130] >       "username": "",
	I0729 18:24:35.034325  123843 command_runner.go:130] >       "spec": null,
	I0729 18:24:35.034328  123843 command_runner.go:130] >       "pinned": false
	I0729 18:24:35.034331  123843 command_runner.go:130] >     },
	I0729 18:24:35.034334  123843 command_runner.go:130] >     {
	I0729 18:24:35.034345  123843 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0729 18:24:35.034349  123843 command_runner.go:130] >       "repoTags": [
	I0729 18:24:35.034354  123843 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0729 18:24:35.034357  123843 command_runner.go:130] >       ],
	I0729 18:24:35.034361  123843 command_runner.go:130] >       "repoDigests": [
	I0729 18:24:35.034370  123843 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0729 18:24:35.034376  123843 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0729 18:24:35.034379  123843 command_runner.go:130] >       ],
	I0729 18:24:35.034383  123843 command_runner.go:130] >       "size": "85953945",
	I0729 18:24:35.034386  123843 command_runner.go:130] >       "uid": null,
	I0729 18:24:35.034390  123843 command_runner.go:130] >       "username": "",
	I0729 18:24:35.034393  123843 command_runner.go:130] >       "spec": null,
	I0729 18:24:35.034397  123843 command_runner.go:130] >       "pinned": false
	I0729 18:24:35.034400  123843 command_runner.go:130] >     },
	I0729 18:24:35.034403  123843 command_runner.go:130] >     {
	I0729 18:24:35.034409  123843 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0729 18:24:35.034413  123843 command_runner.go:130] >       "repoTags": [
	I0729 18:24:35.034418  123843 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0729 18:24:35.034424  123843 command_runner.go:130] >       ],
	I0729 18:24:35.034428  123843 command_runner.go:130] >       "repoDigests": [
	I0729 18:24:35.034436  123843 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0729 18:24:35.034445  123843 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0729 18:24:35.034449  123843 command_runner.go:130] >       ],
	I0729 18:24:35.034453  123843 command_runner.go:130] >       "size": "63051080",
	I0729 18:24:35.034458  123843 command_runner.go:130] >       "uid": {
	I0729 18:24:35.034462  123843 command_runner.go:130] >         "value": "0"
	I0729 18:24:35.034465  123843 command_runner.go:130] >       },
	I0729 18:24:35.034469  123843 command_runner.go:130] >       "username": "",
	I0729 18:24:35.034473  123843 command_runner.go:130] >       "spec": null,
	I0729 18:24:35.034478  123843 command_runner.go:130] >       "pinned": false
	I0729 18:24:35.034482  123843 command_runner.go:130] >     },
	I0729 18:24:35.034485  123843 command_runner.go:130] >     {
	I0729 18:24:35.034499  123843 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0729 18:24:35.034502  123843 command_runner.go:130] >       "repoTags": [
	I0729 18:24:35.034509  123843 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0729 18:24:35.034512  123843 command_runner.go:130] >       ],
	I0729 18:24:35.034521  123843 command_runner.go:130] >       "repoDigests": [
	I0729 18:24:35.034530  123843 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0729 18:24:35.034537  123843 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0729 18:24:35.034543  123843 command_runner.go:130] >       ],
	I0729 18:24:35.034546  123843 command_runner.go:130] >       "size": "750414",
	I0729 18:24:35.034558  123843 command_runner.go:130] >       "uid": {
	I0729 18:24:35.034561  123843 command_runner.go:130] >         "value": "65535"
	I0729 18:24:35.034565  123843 command_runner.go:130] >       },
	I0729 18:24:35.034569  123843 command_runner.go:130] >       "username": "",
	I0729 18:24:35.034574  123843 command_runner.go:130] >       "spec": null,
	I0729 18:24:35.034578  123843 command_runner.go:130] >       "pinned": true
	I0729 18:24:35.034581  123843 command_runner.go:130] >     }
	I0729 18:24:35.034587  123843 command_runner.go:130] >   ]
	I0729 18:24:35.034594  123843 command_runner.go:130] > }
	I0729 18:24:35.034951  123843 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 18:24:35.034966  123843 crio.go:433] Images already preloaded, skipping extraction
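	The JSON above is the raw output of "sudo crictl images --output json"; per the crio.go:514 and crio.go:433 lines, minikube treats the presence of all expected v1.30.3 images in that list as the signal to skip extracting the preload tarball. A minimal sketch of reproducing the same listing by hand, assuming jq is available on the guest (it is not necessarily shipped in the Buildroot image):

		# Sketch: list the repo tags CRI-O already holds, as the preload check sees them
		sudo crictl images --output json | jq -r '.images[].repoTags[]' | sort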
	I0729 18:24:35.035017  123843 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:24:35.067287  123843 command_runner.go:130] > {
	I0729 18:24:35.067309  123843 command_runner.go:130] >   "images": [
	I0729 18:24:35.067313  123843 command_runner.go:130] >     {
	I0729 18:24:35.067325  123843 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0729 18:24:35.067330  123843 command_runner.go:130] >       "repoTags": [
	I0729 18:24:35.067344  123843 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0729 18:24:35.067351  123843 command_runner.go:130] >       ],
	I0729 18:24:35.067355  123843 command_runner.go:130] >       "repoDigests": [
	I0729 18:24:35.067365  123843 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0729 18:24:35.067373  123843 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0729 18:24:35.067377  123843 command_runner.go:130] >       ],
	I0729 18:24:35.067382  123843 command_runner.go:130] >       "size": "87165492",
	I0729 18:24:35.067386  123843 command_runner.go:130] >       "uid": null,
	I0729 18:24:35.067393  123843 command_runner.go:130] >       "username": "",
	I0729 18:24:35.067398  123843 command_runner.go:130] >       "spec": null,
	I0729 18:24:35.067402  123843 command_runner.go:130] >       "pinned": false
	I0729 18:24:35.067407  123843 command_runner.go:130] >     },
	I0729 18:24:35.067411  123843 command_runner.go:130] >     {
	I0729 18:24:35.067416  123843 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0729 18:24:35.067423  123843 command_runner.go:130] >       "repoTags": [
	I0729 18:24:35.067428  123843 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0729 18:24:35.067432  123843 command_runner.go:130] >       ],
	I0729 18:24:35.067436  123843 command_runner.go:130] >       "repoDigests": [
	I0729 18:24:35.067442  123843 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0729 18:24:35.067451  123843 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0729 18:24:35.067455  123843 command_runner.go:130] >       ],
	I0729 18:24:35.067459  123843 command_runner.go:130] >       "size": "87174707",
	I0729 18:24:35.067464  123843 command_runner.go:130] >       "uid": null,
	I0729 18:24:35.067470  123843 command_runner.go:130] >       "username": "",
	I0729 18:24:35.067475  123843 command_runner.go:130] >       "spec": null,
	I0729 18:24:35.067478  123843 command_runner.go:130] >       "pinned": false
	I0729 18:24:35.067481  123843 command_runner.go:130] >     },
	I0729 18:24:35.067485  123843 command_runner.go:130] >     {
	I0729 18:24:35.067491  123843 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0729 18:24:35.067496  123843 command_runner.go:130] >       "repoTags": [
	I0729 18:24:35.067500  123843 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0729 18:24:35.067504  123843 command_runner.go:130] >       ],
	I0729 18:24:35.067508  123843 command_runner.go:130] >       "repoDigests": [
	I0729 18:24:35.067518  123843 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0729 18:24:35.067527  123843 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0729 18:24:35.067531  123843 command_runner.go:130] >       ],
	I0729 18:24:35.067542  123843 command_runner.go:130] >       "size": "1363676",
	I0729 18:24:35.067549  123843 command_runner.go:130] >       "uid": null,
	I0729 18:24:35.067557  123843 command_runner.go:130] >       "username": "",
	I0729 18:24:35.067565  123843 command_runner.go:130] >       "spec": null,
	I0729 18:24:35.067572  123843 command_runner.go:130] >       "pinned": false
	I0729 18:24:35.067575  123843 command_runner.go:130] >     },
	I0729 18:24:35.067579  123843 command_runner.go:130] >     {
	I0729 18:24:35.067585  123843 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0729 18:24:35.067591  123843 command_runner.go:130] >       "repoTags": [
	I0729 18:24:35.067596  123843 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0729 18:24:35.067602  123843 command_runner.go:130] >       ],
	I0729 18:24:35.067606  123843 command_runner.go:130] >       "repoDigests": [
	I0729 18:24:35.067616  123843 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0729 18:24:35.067632  123843 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0729 18:24:35.067638  123843 command_runner.go:130] >       ],
	I0729 18:24:35.067642  123843 command_runner.go:130] >       "size": "31470524",
	I0729 18:24:35.067648  123843 command_runner.go:130] >       "uid": null,
	I0729 18:24:35.067652  123843 command_runner.go:130] >       "username": "",
	I0729 18:24:35.067658  123843 command_runner.go:130] >       "spec": null,
	I0729 18:24:35.067661  123843 command_runner.go:130] >       "pinned": false
	I0729 18:24:35.067667  123843 command_runner.go:130] >     },
	I0729 18:24:35.067670  123843 command_runner.go:130] >     {
	I0729 18:24:35.067678  123843 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0729 18:24:35.067685  123843 command_runner.go:130] >       "repoTags": [
	I0729 18:24:35.067690  123843 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0729 18:24:35.067696  123843 command_runner.go:130] >       ],
	I0729 18:24:35.067700  123843 command_runner.go:130] >       "repoDigests": [
	I0729 18:24:35.067709  123843 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0729 18:24:35.067718  123843 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0729 18:24:35.067723  123843 command_runner.go:130] >       ],
	I0729 18:24:35.067727  123843 command_runner.go:130] >       "size": "61245718",
	I0729 18:24:35.067733  123843 command_runner.go:130] >       "uid": null,
	I0729 18:24:35.067738  123843 command_runner.go:130] >       "username": "nonroot",
	I0729 18:24:35.067744  123843 command_runner.go:130] >       "spec": null,
	I0729 18:24:35.067747  123843 command_runner.go:130] >       "pinned": false
	I0729 18:24:35.067753  123843 command_runner.go:130] >     },
	I0729 18:24:35.067760  123843 command_runner.go:130] >     {
	I0729 18:24:35.067768  123843 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0729 18:24:35.067775  123843 command_runner.go:130] >       "repoTags": [
	I0729 18:24:35.067780  123843 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0729 18:24:35.067786  123843 command_runner.go:130] >       ],
	I0729 18:24:35.067789  123843 command_runner.go:130] >       "repoDigests": [
	I0729 18:24:35.067805  123843 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0729 18:24:35.067813  123843 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0729 18:24:35.067819  123843 command_runner.go:130] >       ],
	I0729 18:24:35.067822  123843 command_runner.go:130] >       "size": "150779692",
	I0729 18:24:35.067829  123843 command_runner.go:130] >       "uid": {
	I0729 18:24:35.067832  123843 command_runner.go:130] >         "value": "0"
	I0729 18:24:35.067839  123843 command_runner.go:130] >       },
	I0729 18:24:35.067846  123843 command_runner.go:130] >       "username": "",
	I0729 18:24:35.067850  123843 command_runner.go:130] >       "spec": null,
	I0729 18:24:35.067855  123843 command_runner.go:130] >       "pinned": false
	I0729 18:24:35.067859  123843 command_runner.go:130] >     },
	I0729 18:24:35.067864  123843 command_runner.go:130] >     {
	I0729 18:24:35.067870  123843 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0729 18:24:35.067876  123843 command_runner.go:130] >       "repoTags": [
	I0729 18:24:35.067881  123843 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0729 18:24:35.067886  123843 command_runner.go:130] >       ],
	I0729 18:24:35.067891  123843 command_runner.go:130] >       "repoDigests": [
	I0729 18:24:35.067900  123843 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0729 18:24:35.067914  123843 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0729 18:24:35.067919  123843 command_runner.go:130] >       ],
	I0729 18:24:35.067924  123843 command_runner.go:130] >       "size": "117609954",
	I0729 18:24:35.067930  123843 command_runner.go:130] >       "uid": {
	I0729 18:24:35.067934  123843 command_runner.go:130] >         "value": "0"
	I0729 18:24:35.067937  123843 command_runner.go:130] >       },
	I0729 18:24:35.067943  123843 command_runner.go:130] >       "username": "",
	I0729 18:24:35.067947  123843 command_runner.go:130] >       "spec": null,
	I0729 18:24:35.067953  123843 command_runner.go:130] >       "pinned": false
	I0729 18:24:35.067956  123843 command_runner.go:130] >     },
	I0729 18:24:35.067961  123843 command_runner.go:130] >     {
	I0729 18:24:35.067967  123843 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0729 18:24:35.067979  123843 command_runner.go:130] >       "repoTags": [
	I0729 18:24:35.067986  123843 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0729 18:24:35.067992  123843 command_runner.go:130] >       ],
	I0729 18:24:35.067996  123843 command_runner.go:130] >       "repoDigests": [
	I0729 18:24:35.068019  123843 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0729 18:24:35.068029  123843 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0729 18:24:35.068036  123843 command_runner.go:130] >       ],
	I0729 18:24:35.068039  123843 command_runner.go:130] >       "size": "112198984",
	I0729 18:24:35.068045  123843 command_runner.go:130] >       "uid": {
	I0729 18:24:35.068049  123843 command_runner.go:130] >         "value": "0"
	I0729 18:24:35.068054  123843 command_runner.go:130] >       },
	I0729 18:24:35.068058  123843 command_runner.go:130] >       "username": "",
	I0729 18:24:35.068064  123843 command_runner.go:130] >       "spec": null,
	I0729 18:24:35.068068  123843 command_runner.go:130] >       "pinned": false
	I0729 18:24:35.068073  123843 command_runner.go:130] >     },
	I0729 18:24:35.068077  123843 command_runner.go:130] >     {
	I0729 18:24:35.068085  123843 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0729 18:24:35.068090  123843 command_runner.go:130] >       "repoTags": [
	I0729 18:24:35.068094  123843 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0729 18:24:35.068099  123843 command_runner.go:130] >       ],
	I0729 18:24:35.068104  123843 command_runner.go:130] >       "repoDigests": [
	I0729 18:24:35.068112  123843 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0729 18:24:35.068123  123843 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0729 18:24:35.068129  123843 command_runner.go:130] >       ],
	I0729 18:24:35.068133  123843 command_runner.go:130] >       "size": "85953945",
	I0729 18:24:35.068139  123843 command_runner.go:130] >       "uid": null,
	I0729 18:24:35.068143  123843 command_runner.go:130] >       "username": "",
	I0729 18:24:35.068148  123843 command_runner.go:130] >       "spec": null,
	I0729 18:24:35.068152  123843 command_runner.go:130] >       "pinned": false
	I0729 18:24:35.068157  123843 command_runner.go:130] >     },
	I0729 18:24:35.068161  123843 command_runner.go:130] >     {
	I0729 18:24:35.068169  123843 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0729 18:24:35.068175  123843 command_runner.go:130] >       "repoTags": [
	I0729 18:24:35.068180  123843 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0729 18:24:35.068186  123843 command_runner.go:130] >       ],
	I0729 18:24:35.068189  123843 command_runner.go:130] >       "repoDigests": [
	I0729 18:24:35.068202  123843 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0729 18:24:35.068212  123843 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0729 18:24:35.068216  123843 command_runner.go:130] >       ],
	I0729 18:24:35.068222  123843 command_runner.go:130] >       "size": "63051080",
	I0729 18:24:35.068226  123843 command_runner.go:130] >       "uid": {
	I0729 18:24:35.068232  123843 command_runner.go:130] >         "value": "0"
	I0729 18:24:35.068235  123843 command_runner.go:130] >       },
	I0729 18:24:35.068242  123843 command_runner.go:130] >       "username": "",
	I0729 18:24:35.068245  123843 command_runner.go:130] >       "spec": null,
	I0729 18:24:35.068251  123843 command_runner.go:130] >       "pinned": false
	I0729 18:24:35.068254  123843 command_runner.go:130] >     },
	I0729 18:24:35.068258  123843 command_runner.go:130] >     {
	I0729 18:24:35.068263  123843 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0729 18:24:35.068269  123843 command_runner.go:130] >       "repoTags": [
	I0729 18:24:35.068274  123843 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0729 18:24:35.068279  123843 command_runner.go:130] >       ],
	I0729 18:24:35.068283  123843 command_runner.go:130] >       "repoDigests": [
	I0729 18:24:35.068291  123843 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0729 18:24:35.068300  123843 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0729 18:24:35.068305  123843 command_runner.go:130] >       ],
	I0729 18:24:35.068309  123843 command_runner.go:130] >       "size": "750414",
	I0729 18:24:35.068312  123843 command_runner.go:130] >       "uid": {
	I0729 18:24:35.068317  123843 command_runner.go:130] >         "value": "65535"
	I0729 18:24:35.068325  123843 command_runner.go:130] >       },
	I0729 18:24:35.068331  123843 command_runner.go:130] >       "username": "",
	I0729 18:24:35.068336  123843 command_runner.go:130] >       "spec": null,
	I0729 18:24:35.068341  123843 command_runner.go:130] >       "pinned": true
	I0729 18:24:35.068344  123843 command_runner.go:130] >     }
	I0729 18:24:35.068348  123843 command_runner.go:130] >   ]
	I0729 18:24:35.068351  123843 command_runner.go:130] > }
	I0729 18:24:35.068797  123843 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 18:24:35.068817  123843 cache_images.go:84] Images are preloaded, skipping loading
	I0729 18:24:35.068826  123843 kubeadm.go:934] updating node { 192.168.39.211 8443 v1.30.3 crio true true} ...
	I0729 18:24:35.068937  123843 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-976328 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.211
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-976328 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
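	The block above (kubeadm.go:946) is the kubelet ExecStart line minikube generates for this node, with --hostname-override and --node-ip pinned to multinode-976328 / 192.168.39.211; it is written out as a systemd unit/drop-in on the guest (the exact file path is not shown in this log). A minimal sketch for inspecting what systemd actually loaded, using only standard systemctl:

		# Sketch: print kubelet.service plus every drop-in, as systemd resolved them
		sudo systemctl cat kubelet.service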
	I0729 18:24:35.069007  123843 ssh_runner.go:195] Run: crio config
	I0729 18:24:35.102694  123843 command_runner.go:130] ! time="2024-07-29 18:24:35.077156228Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0729 18:24:35.107891  123843 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0729 18:24:35.120804  123843 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0729 18:24:35.120830  123843 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0729 18:24:35.120840  123843 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0729 18:24:35.120845  123843 command_runner.go:130] > #
	I0729 18:24:35.120876  123843 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0729 18:24:35.120889  123843 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0729 18:24:35.120902  123843 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0729 18:24:35.120924  123843 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0729 18:24:35.120932  123843 command_runner.go:130] > # reload'.
	I0729 18:24:35.120945  123843 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0729 18:24:35.120958  123843 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0729 18:24:35.120969  123843 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0729 18:24:35.120975  123843 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0729 18:24:35.120980  123843 command_runner.go:130] > [crio]
	I0729 18:24:35.120987  123843 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0729 18:24:35.120995  123843 command_runner.go:130] > # containers images, in this directory.
	I0729 18:24:35.121001  123843 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0729 18:24:35.121013  123843 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0729 18:24:35.121020  123843 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0729 18:24:35.121027  123843 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0729 18:24:35.121033  123843 command_runner.go:130] > # imagestore = ""
	I0729 18:24:35.121039  123843 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0729 18:24:35.121046  123843 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0729 18:24:35.121053  123843 command_runner.go:130] > storage_driver = "overlay"
	I0729 18:24:35.121058  123843 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0729 18:24:35.121066  123843 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0729 18:24:35.121073  123843 command_runner.go:130] > storage_option = [
	I0729 18:24:35.121080  123843 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0729 18:24:35.121083  123843 command_runner.go:130] > ]
	I0729 18:24:35.121089  123843 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0729 18:24:35.121099  123843 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0729 18:24:35.121109  123843 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0729 18:24:35.121120  123843 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0729 18:24:35.121131  123843 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0729 18:24:35.121139  123843 command_runner.go:130] > # always happen on a node reboot
	I0729 18:24:35.121149  123843 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0729 18:24:35.121169  123843 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0729 18:24:35.121181  123843 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0729 18:24:35.121190  123843 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0729 18:24:35.121199  123843 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0729 18:24:35.121212  123843 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0729 18:24:35.121225  123843 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0729 18:24:35.121234  123843 command_runner.go:130] > # internal_wipe = true
	I0729 18:24:35.121254  123843 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0729 18:24:35.121266  123843 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0729 18:24:35.121275  123843 command_runner.go:130] > # internal_repair = false
	I0729 18:24:35.121286  123843 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0729 18:24:35.121297  123843 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0729 18:24:35.121309  123843 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0729 18:24:35.121319  123843 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0729 18:24:35.121330  123843 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0729 18:24:35.121338  123843 command_runner.go:130] > [crio.api]
	I0729 18:24:35.121349  123843 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0729 18:24:35.121360  123843 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0729 18:24:35.121370  123843 command_runner.go:130] > # IP address on which the stream server will listen.
	I0729 18:24:35.121378  123843 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0729 18:24:35.121390  123843 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0729 18:24:35.121400  123843 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0729 18:24:35.121410  123843 command_runner.go:130] > # stream_port = "0"
	I0729 18:24:35.121420  123843 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0729 18:24:35.121428  123843 command_runner.go:130] > # stream_enable_tls = false
	I0729 18:24:35.121436  123843 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0729 18:24:35.121446  123843 command_runner.go:130] > # stream_idle_timeout = ""
	I0729 18:24:35.121470  123843 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0729 18:24:35.121483  123843 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0729 18:24:35.121492  123843 command_runner.go:130] > # minutes.
	I0729 18:24:35.121500  123843 command_runner.go:130] > # stream_tls_cert = ""
	I0729 18:24:35.121512  123843 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0729 18:24:35.121524  123843 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0729 18:24:35.121533  123843 command_runner.go:130] > # stream_tls_key = ""
	I0729 18:24:35.121546  123843 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0729 18:24:35.121563  123843 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0729 18:24:35.121606  123843 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0729 18:24:35.121612  123843 command_runner.go:130] > # stream_tls_ca = ""
	I0729 18:24:35.121619  123843 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0729 18:24:35.121625  123843 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0729 18:24:35.121632  123843 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0729 18:24:35.121638  123843 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0729 18:24:35.121645  123843 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0729 18:24:35.121657  123843 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0729 18:24:35.121663  123843 command_runner.go:130] > [crio.runtime]
	I0729 18:24:35.121669  123843 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0729 18:24:35.121676  123843 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0729 18:24:35.121680  123843 command_runner.go:130] > # "nofile=1024:2048"
	I0729 18:24:35.121687  123843 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0729 18:24:35.121693  123843 command_runner.go:130] > # default_ulimits = [
	I0729 18:24:35.121696  123843 command_runner.go:130] > # ]
	I0729 18:24:35.121704  123843 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0729 18:24:35.121710  123843 command_runner.go:130] > # no_pivot = false
	I0729 18:24:35.121716  123843 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0729 18:24:35.121724  123843 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0729 18:24:35.121731  123843 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0729 18:24:35.121738  123843 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0729 18:24:35.121746  123843 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0729 18:24:35.121752  123843 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0729 18:24:35.121758  123843 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0729 18:24:35.121762  123843 command_runner.go:130] > # Cgroup setting for conmon
	I0729 18:24:35.121771  123843 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0729 18:24:35.121775  123843 command_runner.go:130] > conmon_cgroup = "pod"
	I0729 18:24:35.121781  123843 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0729 18:24:35.121788  123843 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0729 18:24:35.121796  123843 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0729 18:24:35.121803  123843 command_runner.go:130] > conmon_env = [
	I0729 18:24:35.121809  123843 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0729 18:24:35.121814  123843 command_runner.go:130] > ]
	I0729 18:24:35.121819  123843 command_runner.go:130] > # Additional environment variables to set for all the
	I0729 18:24:35.121826  123843 command_runner.go:130] > # containers. These are overridden if set in the
	I0729 18:24:35.121831  123843 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0729 18:24:35.121837  123843 command_runner.go:130] > # default_env = [
	I0729 18:24:35.121840  123843 command_runner.go:130] > # ]
	I0729 18:24:35.121847  123843 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0729 18:24:35.121855  123843 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0729 18:24:35.121861  123843 command_runner.go:130] > # selinux = false
	I0729 18:24:35.121867  123843 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0729 18:24:35.121875  123843 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0729 18:24:35.121885  123843 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0729 18:24:35.121891  123843 command_runner.go:130] > # seccomp_profile = ""
	I0729 18:24:35.121897  123843 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0729 18:24:35.121904  123843 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0729 18:24:35.121909  123843 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0729 18:24:35.121916  123843 command_runner.go:130] > # which might increase security.
	I0729 18:24:35.121920  123843 command_runner.go:130] > # This option is currently deprecated,
	I0729 18:24:35.121927  123843 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0729 18:24:35.121932  123843 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0729 18:24:35.121939  123843 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0729 18:24:35.121948  123843 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0729 18:24:35.121957  123843 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0729 18:24:35.121964  123843 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0729 18:24:35.121970  123843 command_runner.go:130] > # This option supports live configuration reload.
	I0729 18:24:35.121974  123843 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0729 18:24:35.121980  123843 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0729 18:24:35.121986  123843 command_runner.go:130] > # the cgroup blockio controller.
	I0729 18:24:35.121991  123843 command_runner.go:130] > # blockio_config_file = ""
	I0729 18:24:35.121999  123843 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0729 18:24:35.122005  123843 command_runner.go:130] > # blockio parameters.
	I0729 18:24:35.122009  123843 command_runner.go:130] > # blockio_reload = false
	I0729 18:24:35.122017  123843 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0729 18:24:35.122021  123843 command_runner.go:130] > # irqbalance daemon.
	I0729 18:24:35.122026  123843 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0729 18:24:35.122036  123843 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0729 18:24:35.122045  123843 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0729 18:24:35.122053  123843 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0729 18:24:35.122059  123843 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0729 18:24:35.122067  123843 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0729 18:24:35.122073  123843 command_runner.go:130] > # This option supports live configuration reload.
	I0729 18:24:35.122079  123843 command_runner.go:130] > # rdt_config_file = ""
	I0729 18:24:35.122084  123843 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0729 18:24:35.122090  123843 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0729 18:24:35.122123  123843 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0729 18:24:35.122129  123843 command_runner.go:130] > # separate_pull_cgroup = ""
	I0729 18:24:35.122135  123843 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0729 18:24:35.122147  123843 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0729 18:24:35.122153  123843 command_runner.go:130] > # will be added.
	I0729 18:24:35.122158  123843 command_runner.go:130] > # default_capabilities = [
	I0729 18:24:35.122163  123843 command_runner.go:130] > # 	"CHOWN",
	I0729 18:24:35.122167  123843 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0729 18:24:35.122172  123843 command_runner.go:130] > # 	"FSETID",
	I0729 18:24:35.122176  123843 command_runner.go:130] > # 	"FOWNER",
	I0729 18:24:35.122181  123843 command_runner.go:130] > # 	"SETGID",
	I0729 18:24:35.122185  123843 command_runner.go:130] > # 	"SETUID",
	I0729 18:24:35.122191  123843 command_runner.go:130] > # 	"SETPCAP",
	I0729 18:24:35.122194  123843 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0729 18:24:35.122200  123843 command_runner.go:130] > # 	"KILL",
	I0729 18:24:35.122203  123843 command_runner.go:130] > # ]
	I0729 18:24:35.122211  123843 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0729 18:24:35.122219  123843 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0729 18:24:35.122225  123843 command_runner.go:130] > # add_inheritable_capabilities = false
	I0729 18:24:35.122231  123843 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0729 18:24:35.122238  123843 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0729 18:24:35.122241  123843 command_runner.go:130] > default_sysctls = [
	I0729 18:24:35.122248  123843 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0729 18:24:35.122251  123843 command_runner.go:130] > ]
	I0729 18:24:35.122256  123843 command_runner.go:130] > # List of devices on the host that a
	I0729 18:24:35.122264  123843 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0729 18:24:35.122270  123843 command_runner.go:130] > # allowed_devices = [
	I0729 18:24:35.122273  123843 command_runner.go:130] > # 	"/dev/fuse",
	I0729 18:24:35.122278  123843 command_runner.go:130] > # ]
	I0729 18:24:35.122283  123843 command_runner.go:130] > # List of additional devices, specified as
	I0729 18:24:35.122296  123843 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0729 18:24:35.122304  123843 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0729 18:24:35.122311  123843 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0729 18:24:35.122317  123843 command_runner.go:130] > # additional_devices = [
	I0729 18:24:35.122320  123843 command_runner.go:130] > # ]
	I0729 18:24:35.122327  123843 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0729 18:24:35.122331  123843 command_runner.go:130] > # cdi_spec_dirs = [
	I0729 18:24:35.122334  123843 command_runner.go:130] > # 	"/etc/cdi",
	I0729 18:24:35.122342  123843 command_runner.go:130] > # 	"/var/run/cdi",
	I0729 18:24:35.122352  123843 command_runner.go:130] > # ]
	I0729 18:24:35.122364  123843 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0729 18:24:35.122376  123843 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0729 18:24:35.122384  123843 command_runner.go:130] > # Defaults to false.
	I0729 18:24:35.122394  123843 command_runner.go:130] > # device_ownership_from_security_context = false
	I0729 18:24:35.122406  123843 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0729 18:24:35.122417  123843 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0729 18:24:35.122426  123843 command_runner.go:130] > # hooks_dir = [
	I0729 18:24:35.122435  123843 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0729 18:24:35.122441  123843 command_runner.go:130] > # ]
	I0729 18:24:35.122446  123843 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0729 18:24:35.122454  123843 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0729 18:24:35.122461  123843 command_runner.go:130] > # its default mounts from the following two files:
	I0729 18:24:35.122465  123843 command_runner.go:130] > #
	I0729 18:24:35.122471  123843 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0729 18:24:35.122479  123843 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0729 18:24:35.122484  123843 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0729 18:24:35.122489  123843 command_runner.go:130] > #
	I0729 18:24:35.122494  123843 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0729 18:24:35.122503  123843 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0729 18:24:35.122509  123843 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0729 18:24:35.122515  123843 command_runner.go:130] > #      only add mounts it finds in this file.
	I0729 18:24:35.122519  123843 command_runner.go:130] > #
	I0729 18:24:35.122522  123843 command_runner.go:130] > # default_mounts_file = ""
	I0729 18:24:35.122530  123843 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0729 18:24:35.122539  123843 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0729 18:24:35.122544  123843 command_runner.go:130] > pids_limit = 1024
	I0729 18:24:35.122555  123843 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0729 18:24:35.122562  123843 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0729 18:24:35.122568  123843 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0729 18:24:35.122578  123843 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0729 18:24:35.122583  123843 command_runner.go:130] > # log_size_max = -1
	I0729 18:24:35.122590  123843 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0729 18:24:35.122599  123843 command_runner.go:130] > # log_to_journald = false
	I0729 18:24:35.122607  123843 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0729 18:24:35.122612  123843 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0729 18:24:35.122626  123843 command_runner.go:130] > # Path to directory for container attach sockets.
	I0729 18:24:35.122633  123843 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0729 18:24:35.122638  123843 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0729 18:24:35.122644  123843 command_runner.go:130] > # bind_mount_prefix = ""
	I0729 18:24:35.122649  123843 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0729 18:24:35.122654  123843 command_runner.go:130] > # read_only = false
	I0729 18:24:35.122659  123843 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0729 18:24:35.122667  123843 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0729 18:24:35.122674  123843 command_runner.go:130] > # live configuration reload.
	I0729 18:24:35.122677  123843 command_runner.go:130] > # log_level = "info"
	I0729 18:24:35.122685  123843 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0729 18:24:35.122689  123843 command_runner.go:130] > # This option supports live configuration reload.
	I0729 18:24:35.122695  123843 command_runner.go:130] > # log_filter = ""
	I0729 18:24:35.122701  123843 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0729 18:24:35.122710  123843 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0729 18:24:35.122717  123843 command_runner.go:130] > # separated by comma.
	I0729 18:24:35.122724  123843 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 18:24:35.122730  123843 command_runner.go:130] > # uid_mappings = ""
	I0729 18:24:35.122736  123843 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0729 18:24:35.122743  123843 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0729 18:24:35.122749  123843 command_runner.go:130] > # separated by comma.
	I0729 18:24:35.122756  123843 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 18:24:35.122762  123843 command_runner.go:130] > # gid_mappings = ""
	I0729 18:24:35.122768  123843 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0729 18:24:35.122776  123843 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0729 18:24:35.122782  123843 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0729 18:24:35.122791  123843 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 18:24:35.122797  123843 command_runner.go:130] > # minimum_mappable_uid = -1
	I0729 18:24:35.122804  123843 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0729 18:24:35.122812  123843 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0729 18:24:35.122818  123843 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0729 18:24:35.122827  123843 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 18:24:35.122835  123843 command_runner.go:130] > # minimum_mappable_gid = -1
	I0729 18:24:35.122841  123843 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0729 18:24:35.122849  123843 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0729 18:24:35.122861  123843 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0729 18:24:35.122872  123843 command_runner.go:130] > # ctr_stop_timeout = 30
	I0729 18:24:35.122880  123843 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0729 18:24:35.122888  123843 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0729 18:24:35.122893  123843 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0729 18:24:35.122900  123843 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0729 18:24:35.122904  123843 command_runner.go:130] > drop_infra_ctr = false
	I0729 18:24:35.122912  123843 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0729 18:24:35.122920  123843 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0729 18:24:35.122927  123843 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0729 18:24:35.122933  123843 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0729 18:24:35.122940  123843 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0729 18:24:35.122948  123843 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0729 18:24:35.122955  123843 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0729 18:24:35.122960  123843 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0729 18:24:35.122966  123843 command_runner.go:130] > # shared_cpuset = ""
	I0729 18:24:35.122971  123843 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0729 18:24:35.122976  123843 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0729 18:24:35.122982  123843 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0729 18:24:35.122988  123843 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0729 18:24:35.122995  123843 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0729 18:24:35.123000  123843 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0729 18:24:35.123008  123843 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0729 18:24:35.123012  123843 command_runner.go:130] > # enable_criu_support = false
	I0729 18:24:35.123017  123843 command_runner.go:130] > # Enable/disable the generation of the container,
	I0729 18:24:35.123025  123843 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0729 18:24:35.123031  123843 command_runner.go:130] > # enable_pod_events = false
	I0729 18:24:35.123037  123843 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0729 18:24:35.123052  123843 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0729 18:24:35.123056  123843 command_runner.go:130] > # default_runtime = "runc"
	I0729 18:24:35.123061  123843 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0729 18:24:35.123071  123843 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0729 18:24:35.123080  123843 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0729 18:24:35.123090  123843 command_runner.go:130] > # creation as a file is not desired either.
	I0729 18:24:35.123099  123843 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0729 18:24:35.123106  123843 command_runner.go:130] > # the hostname is being managed dynamically.
	I0729 18:24:35.123118  123843 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0729 18:24:35.123123  123843 command_runner.go:130] > # ]
	I0729 18:24:35.123129  123843 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0729 18:24:35.123138  123843 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0729 18:24:35.123145  123843 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0729 18:24:35.123150  123843 command_runner.go:130] > # Each entry in the table should follow the format:
	I0729 18:24:35.123155  123843 command_runner.go:130] > #
	I0729 18:24:35.123159  123843 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0729 18:24:35.123166  123843 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0729 18:24:35.123212  123843 command_runner.go:130] > # runtime_type = "oci"
	I0729 18:24:35.123219  123843 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0729 18:24:35.123224  123843 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0729 18:24:35.123228  123843 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0729 18:24:35.123232  123843 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0729 18:24:35.123236  123843 command_runner.go:130] > # monitor_env = []
	I0729 18:24:35.123240  123843 command_runner.go:130] > # privileged_without_host_devices = false
	I0729 18:24:35.123247  123843 command_runner.go:130] > # allowed_annotations = []
	I0729 18:24:35.123253  123843 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0729 18:24:35.123259  123843 command_runner.go:130] > # Where:
	I0729 18:24:35.123264  123843 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0729 18:24:35.123272  123843 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0729 18:24:35.123279  123843 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0729 18:24:35.123287  123843 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0729 18:24:35.123292  123843 command_runner.go:130] > #   in $PATH.
	I0729 18:24:35.123299  123843 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0729 18:24:35.123305  123843 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0729 18:24:35.123311  123843 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0729 18:24:35.123317  123843 command_runner.go:130] > #   state.
	I0729 18:24:35.123322  123843 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0729 18:24:35.123335  123843 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0729 18:24:35.123346  123843 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0729 18:24:35.123357  123843 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0729 18:24:35.123368  123843 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0729 18:24:35.123380  123843 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0729 18:24:35.123393  123843 command_runner.go:130] > #   The currently recognized values are:
	I0729 18:24:35.123406  123843 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0729 18:24:35.123426  123843 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0729 18:24:35.123438  123843 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0729 18:24:35.123449  123843 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0729 18:24:35.123463  123843 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0729 18:24:35.123472  123843 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0729 18:24:35.123481  123843 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0729 18:24:35.123487  123843 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0729 18:24:35.123495  123843 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0729 18:24:35.123501  123843 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0729 18:24:35.123507  123843 command_runner.go:130] > #   deprecated option "conmon".
	I0729 18:24:35.123514  123843 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0729 18:24:35.123521  123843 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0729 18:24:35.123527  123843 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0729 18:24:35.123534  123843 command_runner.go:130] > #   should be moved to the container's cgroup
	I0729 18:24:35.123540  123843 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0729 18:24:35.123547  123843 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0729 18:24:35.123558  123843 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0729 18:24:35.123569  123843 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0729 18:24:35.123573  123843 command_runner.go:130] > #
	I0729 18:24:35.123578  123843 command_runner.go:130] > # Using the seccomp notifier feature:
	I0729 18:24:35.123583  123843 command_runner.go:130] > #
	I0729 18:24:35.123588  123843 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0729 18:24:35.123596  123843 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0729 18:24:35.123600  123843 command_runner.go:130] > #
	I0729 18:24:35.123606  123843 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0729 18:24:35.123613  123843 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0729 18:24:35.123621  123843 command_runner.go:130] > #
	I0729 18:24:35.123627  123843 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0729 18:24:35.123633  123843 command_runner.go:130] > # feature.
	I0729 18:24:35.123636  123843 command_runner.go:130] > #
	I0729 18:24:35.123642  123843 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0729 18:24:35.123650  123843 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0729 18:24:35.123656  123843 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0729 18:24:35.123673  123843 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0729 18:24:35.123681  123843 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0729 18:24:35.123686  123843 command_runner.go:130] > #
	I0729 18:24:35.123696  123843 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0729 18:24:35.123705  123843 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0729 18:24:35.123710  123843 command_runner.go:130] > #
	I0729 18:24:35.123716  123843 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0729 18:24:35.123724  123843 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0729 18:24:35.123728  123843 command_runner.go:130] > #
	I0729 18:24:35.123734  123843 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0729 18:24:35.123743  123843 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0729 18:24:35.123747  123843 command_runner.go:130] > # limitation.
	I0729 18:24:35.123752  123843 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0729 18:24:35.123759  123843 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0729 18:24:35.123763  123843 command_runner.go:130] > runtime_type = "oci"
	I0729 18:24:35.123769  123843 command_runner.go:130] > runtime_root = "/run/runc"
	I0729 18:24:35.123773  123843 command_runner.go:130] > runtime_config_path = ""
	I0729 18:24:35.123780  123843 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0729 18:24:35.123784  123843 command_runner.go:130] > monitor_cgroup = "pod"
	I0729 18:24:35.123790  123843 command_runner.go:130] > monitor_exec_cgroup = ""
	I0729 18:24:35.123794  123843 command_runner.go:130] > monitor_env = [
	I0729 18:24:35.123803  123843 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0729 18:24:35.123808  123843 command_runner.go:130] > ]
	I0729 18:24:35.123812  123843 command_runner.go:130] > privileged_without_host_devices = false
	I0729 18:24:35.123818  123843 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0729 18:24:35.123826  123843 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0729 18:24:35.123831  123843 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0729 18:24:35.123841  123843 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0729 18:24:35.123850  123843 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0729 18:24:35.123857  123843 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0729 18:24:35.123866  123843 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0729 18:24:35.123875  123843 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0729 18:24:35.123880  123843 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0729 18:24:35.123886  123843 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0729 18:24:35.123890  123843 command_runner.go:130] > # Example:
	I0729 18:24:35.123894  123843 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0729 18:24:35.123898  123843 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0729 18:24:35.123905  123843 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0729 18:24:35.123909  123843 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0729 18:24:35.123918  123843 command_runner.go:130] > # cpuset = 0
	I0729 18:24:35.123923  123843 command_runner.go:130] > # cpushares = "0-1"
	I0729 18:24:35.123926  123843 command_runner.go:130] > # Where:
	I0729 18:24:35.123930  123843 command_runner.go:130] > # The workload name is workload-type.
	I0729 18:24:35.123936  123843 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0729 18:24:35.123941  123843 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0729 18:24:35.123946  123843 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0729 18:24:35.123959  123843 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0729 18:24:35.123964  123843 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0729 18:24:35.123969  123843 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0729 18:24:35.123975  123843 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0729 18:24:35.123978  123843 command_runner.go:130] > # Default value is set to true
	I0729 18:24:35.123982  123843 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0729 18:24:35.123989  123843 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0729 18:24:35.123996  123843 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0729 18:24:35.124000  123843 command_runner.go:130] > # Default value is set to 'false'
	I0729 18:24:35.124006  123843 command_runner.go:130] > # disable_hostport_mapping = false
	I0729 18:24:35.124012  123843 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0729 18:24:35.124017  123843 command_runner.go:130] > #
	I0729 18:24:35.124023  123843 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0729 18:24:35.124030  123843 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0729 18:24:35.124036  123843 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0729 18:24:35.124044  123843 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0729 18:24:35.124050  123843 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0729 18:24:35.124056  123843 command_runner.go:130] > [crio.image]
	I0729 18:24:35.124061  123843 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0729 18:24:35.124067  123843 command_runner.go:130] > # default_transport = "docker://"
	I0729 18:24:35.124078  123843 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0729 18:24:35.124086  123843 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0729 18:24:35.124092  123843 command_runner.go:130] > # global_auth_file = ""
	I0729 18:24:35.124097  123843 command_runner.go:130] > # The image used to instantiate infra containers.
	I0729 18:24:35.124103  123843 command_runner.go:130] > # This option supports live configuration reload.
	I0729 18:24:35.124108  123843 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0729 18:24:35.124116  123843 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0729 18:24:35.124126  123843 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0729 18:24:35.124131  123843 command_runner.go:130] > # This option supports live configuration reload.
	I0729 18:24:35.124242  123843 command_runner.go:130] > # pause_image_auth_file = ""
	I0729 18:24:35.124391  123843 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0729 18:24:35.124410  123843 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0729 18:24:35.124419  123843 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0729 18:24:35.124433  123843 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0729 18:24:35.124440  123843 command_runner.go:130] > # pause_command = "/pause"
	I0729 18:24:35.124449  123843 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0729 18:24:35.124462  123843 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0729 18:24:35.124475  123843 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0729 18:24:35.124489  123843 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0729 18:24:35.124509  123843 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0729 18:24:35.124523  123843 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0729 18:24:35.124528  123843 command_runner.go:130] > # pinned_images = [
	I0729 18:24:35.124533  123843 command_runner.go:130] > # ]
	I0729 18:24:35.124552  123843 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0729 18:24:35.124561  123843 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0729 18:24:35.124576  123843 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0729 18:24:35.124585  123843 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0729 18:24:35.124633  123843 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0729 18:24:35.124684  123843 command_runner.go:130] > # signature_policy = ""
	I0729 18:24:35.124695  123843 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0729 18:24:35.124718  123843 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0729 18:24:35.124734  123843 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0729 18:24:35.124744  123843 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0729 18:24:35.124757  123843 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0729 18:24:35.124768  123843 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0729 18:24:35.124778  123843 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0729 18:24:35.124793  123843 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0729 18:24:35.124799  123843 command_runner.go:130] > # changing them here.
	I0729 18:24:35.124808  123843 command_runner.go:130] > # insecure_registries = [
	I0729 18:24:35.124813  123843 command_runner.go:130] > # ]
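	Note: as the comments above indicate, registry settings are normally kept in containers-registries.conf(5) rather than in crio.conf itself. A minimal sketch of such a file, with a purely illustrative registry name, could look like:

	# Hypothetical /etc/containers/registries.conf fragment (illustrative only)
	unqualified-search-registries = ["docker.io"]

	[[registry]]
	# Example registry name; not one used by this test run.
	prefix   = "registry.example.internal"
	location = "registry.example.internal"
	# Equivalent to listing the registry under insecure_registries above.
	insecure = true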
	I0729 18:24:35.124827  123843 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0729 18:24:35.124835  123843 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0729 18:24:35.124841  123843 command_runner.go:130] > # image_volumes = "mkdir"
	I0729 18:24:35.124849  123843 command_runner.go:130] > # Temporary directory to use for storing big files
	I0729 18:24:35.124875  123843 command_runner.go:130] > # big_files_temporary_dir = ""
	I0729 18:24:35.124890  123843 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0729 18:24:35.124895  123843 command_runner.go:130] > # CNI plugins.
	I0729 18:24:35.124901  123843 command_runner.go:130] > [crio.network]
	I0729 18:24:35.124914  123843 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0729 18:24:35.124923  123843 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0729 18:24:35.124929  123843 command_runner.go:130] > # cni_default_network = ""
	I0729 18:24:35.124937  123843 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0729 18:24:35.124948  123843 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0729 18:24:35.124956  123843 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0729 18:24:35.124961  123843 command_runner.go:130] > # plugin_dirs = [
	I0729 18:24:35.124968  123843 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0729 18:24:35.124972  123843 command_runner.go:130] > # ]
	I0729 18:24:35.124986  123843 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0729 18:24:35.124992  123843 command_runner.go:130] > [crio.metrics]
	I0729 18:24:35.124999  123843 command_runner.go:130] > # Globally enable or disable metrics support.
	I0729 18:24:35.125005  123843 command_runner.go:130] > enable_metrics = true
	I0729 18:24:35.125017  123843 command_runner.go:130] > # Specify enabled metrics collectors.
	I0729 18:24:35.125024  123843 command_runner.go:130] > # Per default all metrics are enabled.
	I0729 18:24:35.125040  123843 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0729 18:24:35.125055  123843 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0729 18:24:35.125063  123843 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0729 18:24:35.125069  123843 command_runner.go:130] > # metrics_collectors = [
	I0729 18:24:35.125075  123843 command_runner.go:130] > # 	"operations",
	I0729 18:24:35.125084  123843 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0729 18:24:35.125095  123843 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0729 18:24:35.125101  123843 command_runner.go:130] > # 	"operations_errors",
	I0729 18:24:35.125107  123843 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0729 18:24:35.125113  123843 command_runner.go:130] > # 	"image_pulls_by_name",
	I0729 18:24:35.125125  123843 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0729 18:24:35.125137  123843 command_runner.go:130] > # 	"image_pulls_failures",
	I0729 18:24:35.125143  123843 command_runner.go:130] > # 	"image_pulls_successes",
	I0729 18:24:35.125150  123843 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0729 18:24:35.125156  123843 command_runner.go:130] > # 	"image_layer_reuse",
	I0729 18:24:35.125163  123843 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0729 18:24:35.125174  123843 command_runner.go:130] > # 	"containers_oom_total",
	I0729 18:24:35.125181  123843 command_runner.go:130] > # 	"containers_oom",
	I0729 18:24:35.125188  123843 command_runner.go:130] > # 	"processes_defunct",
	I0729 18:24:35.125193  123843 command_runner.go:130] > # 	"operations_total",
	I0729 18:24:35.125200  123843 command_runner.go:130] > # 	"operations_latency_seconds",
	I0729 18:24:35.125213  123843 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0729 18:24:35.125219  123843 command_runner.go:130] > # 	"operations_errors_total",
	I0729 18:24:35.125226  123843 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0729 18:24:35.125232  123843 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0729 18:24:35.125239  123843 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0729 18:24:35.125251  123843 command_runner.go:130] > # 	"image_pulls_success_total",
	I0729 18:24:35.125261  123843 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0729 18:24:35.125268  123843 command_runner.go:130] > # 	"containers_oom_count_total",
	I0729 18:24:35.125275  123843 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0729 18:24:35.125287  123843 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0729 18:24:35.125292  123843 command_runner.go:130] > # ]
	I0729 18:24:35.125300  123843 command_runner.go:130] > # The port on which the metrics server will listen.
	I0729 18:24:35.125307  123843 command_runner.go:130] > # metrics_port = 9090
	I0729 18:24:35.125314  123843 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0729 18:24:35.125325  123843 command_runner.go:130] > # metrics_socket = ""
	I0729 18:24:35.125338  123843 command_runner.go:130] > # The certificate for the secure metrics server.
	I0729 18:24:35.125348  123843 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0729 18:24:35.125362  123843 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0729 18:24:35.125369  123843 command_runner.go:130] > # certificate on any modification event.
	I0729 18:24:35.125375  123843 command_runner.go:130] > # metrics_cert = ""
	I0729 18:24:35.125382  123843 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0729 18:24:35.125394  123843 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0729 18:24:35.125400  123843 command_runner.go:130] > # metrics_key = ""
	I0729 18:24:35.125409  123843 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0729 18:24:35.125414  123843 command_runner.go:130] > [crio.tracing]
	I0729 18:24:35.125428  123843 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0729 18:24:35.125435  123843 command_runner.go:130] > # enable_tracing = false
	I0729 18:24:35.125443  123843 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0729 18:24:35.125450  123843 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0729 18:24:35.125469  123843 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0729 18:24:35.125476  123843 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0729 18:24:35.125482  123843 command_runner.go:130] > # CRI-O NRI configuration.
	I0729 18:24:35.125488  123843 command_runner.go:130] > [crio.nri]
	I0729 18:24:35.125513  123843 command_runner.go:130] > # Globally enable or disable NRI.
	I0729 18:24:35.125519  123843 command_runner.go:130] > # enable_nri = false
	I0729 18:24:35.125526  123843 command_runner.go:130] > # NRI socket to listen on.
	I0729 18:24:35.125533  123843 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0729 18:24:35.125539  123843 command_runner.go:130] > # NRI plugin directory to use.
	I0729 18:24:35.125551  123843 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0729 18:24:35.125558  123843 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0729 18:24:35.125566  123843 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0729 18:24:35.125574  123843 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0729 18:24:35.125585  123843 command_runner.go:130] > # nri_disable_connections = false
	I0729 18:24:35.125645  123843 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0729 18:24:35.125681  123843 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0729 18:24:35.125691  123843 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0729 18:24:35.125704  123843 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0729 18:24:35.125725  123843 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0729 18:24:35.125735  123843 command_runner.go:130] > [crio.stats]
	I0729 18:24:35.125757  123843 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0729 18:24:35.125776  123843 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0729 18:24:35.125789  123843 command_runner.go:130] > # stats_collection_period = 0
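The commented defaults dumped above describe CRI-O's [crio.metrics], [crio.tracing], [crio.nri] and [crio.stats] TOML sections. As a hedged illustration only (not minikube code; the section and key names are taken from the dump above, and github.com/BurntSushi/toml is just one common TOML parser), a minimal Go sketch that reads two of those keys back out of a TOML snippet:

	// Sketch: parse metrics_port and stats_collection_period from CRI-O-style TOML.
	package main

	import (
		"fmt"

		"github.com/BurntSushi/toml"
	)

	type crioConfig struct {
		Crio struct {
			Metrics struct {
				MetricsPort int `toml:"metrics_port"`
			} `toml:"metrics"`
			Stats struct {
				StatsCollectionPeriod int `toml:"stats_collection_period"`
			} `toml:"stats"`
		} `toml:"crio"`
	}

	func main() {
		// Inline snippet mirroring the defaults shown in the dump above.
		const conf = `
	[crio.metrics]
	metrics_port = 9090

	[crio.stats]
	stats_collection_period = 0
	`
		var cfg crioConfig
		if _, err := toml.Decode(conf, &cfg); err != nil {
			panic(err)
		}
		fmt.Printf("metrics_port=%d stats_collection_period=%ds (0 = collect on demand)\n",
			cfg.Crio.Metrics.MetricsPort, cfg.Crio.Stats.StatsCollectionPeriod)
	}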
	I0729 18:24:35.126194  123843 cni.go:84] Creating CNI manager for ""
	I0729 18:24:35.126206  123843 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0729 18:24:35.126216  123843 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 18:24:35.126238  123843 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.211 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-976328 NodeName:multinode-976328 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.211"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.211 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 18:24:35.126369  123843 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.211
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-976328"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.211
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.211"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
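	The block above is the multi-document kubeadm.yaml that minikube writes to /var/tmp/minikube/kubeadm.yaml.new. As a minimal sketch only (assuming the generated file is saved locally as "kubeadm.yaml"; the struct shape is illustrative, not minikube's internal types), the KubeletConfiguration document can be pulled out of the stream with gopkg.in/yaml.v3 to confirm the "0%" evictionHard thresholds that disable disk-pressure eviction:

	// Sketch: extract the KubeletConfiguration document and print evictionHard.
	package main

	import (
		"bytes"
		"errors"
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	type kubeletConfig struct {
		Kind         string            `yaml:"kind"`
		EvictionHard map[string]string `yaml:"evictionHard"`
	}

	func main() {
		raw, err := os.ReadFile("kubeadm.yaml") // assumed local copy of the config above
		if err != nil {
			panic(err)
		}
		dec := yaml.NewDecoder(bytes.NewReader(raw))
		for {
			var doc kubeletConfig
			if err := dec.Decode(&doc); errors.Is(err, io.EOF) {
				break
			} else if err != nil {
				panic(err)
			}
			if doc.Kind == "KubeletConfiguration" {
				// Expected: map[imagefs.available:0% nodefs.available:0% nodefs.inodesFree:0%]
				fmt.Println(doc.EvictionHard)
			}
		}
	}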
	
	I0729 18:24:35.126434  123843 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 18:24:35.136673  123843 command_runner.go:130] > kubeadm
	I0729 18:24:35.136689  123843 command_runner.go:130] > kubectl
	I0729 18:24:35.136694  123843 command_runner.go:130] > kubelet
	I0729 18:24:35.136718  123843 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 18:24:35.136779  123843 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 18:24:35.146215  123843 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0729 18:24:35.162281  123843 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 18:24:35.178642  123843 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0729 18:24:35.194838  123843 ssh_runner.go:195] Run: grep 192.168.39.211	control-plane.minikube.internal$ /etc/hosts
	I0729 18:24:35.198434  123843 command_runner.go:130] > 192.168.39.211	control-plane.minikube.internal
	I0729 18:24:35.198575  123843 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:24:35.338496  123843 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:24:35.353476  123843 certs.go:68] Setting up /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/multinode-976328 for IP: 192.168.39.211
	I0729 18:24:35.353502  123843 certs.go:194] generating shared ca certs ...
	I0729 18:24:35.353521  123843 certs.go:226] acquiring lock for ca certs: {Name:mk20106d58b3f22ea0fc0f4b499fa3fa572a2690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:24:35.353706  123843 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key
	I0729 18:24:35.353772  123843 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key
	I0729 18:24:35.353786  123843 certs.go:256] generating profile certs ...
	I0729 18:24:35.353885  123843 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/multinode-976328/client.key
	I0729 18:24:35.353958  123843 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/multinode-976328/apiserver.key.21ce94e8
	I0729 18:24:35.354020  123843 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/multinode-976328/proxy-client.key
	I0729 18:24:35.354034  123843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 18:24:35.354049  123843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 18:24:35.354067  123843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 18:24:35.354085  123843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 18:24:35.354101  123843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/multinode-976328/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 18:24:35.354120  123843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/multinode-976328/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 18:24:35.354134  123843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/multinode-976328/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 18:24:35.354151  123843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/multinode-976328/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 18:24:35.354219  123843 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282.pem (1338 bytes)
	W0729 18:24:35.354260  123843 certs.go:480] ignoring /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282_empty.pem, impossibly tiny 0 bytes
	I0729 18:24:35.354274  123843 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 18:24:35.354306  123843 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem (1078 bytes)
	I0729 18:24:35.354337  123843 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem (1123 bytes)
	I0729 18:24:35.354367  123843 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem (1679 bytes)
	I0729 18:24:35.354416  123843 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem (1708 bytes)
	I0729 18:24:35.354459  123843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282.pem -> /usr/share/ca-certificates/95282.pem
	I0729 18:24:35.354476  123843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem -> /usr/share/ca-certificates/952822.pem
	I0729 18:24:35.354491  123843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:24:35.355285  123843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 18:24:35.379278  123843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 18:24:35.402310  123843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 18:24:35.425849  123843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 18:24:35.448835  123843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/multinode-976328/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 18:24:35.472586  123843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/multinode-976328/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 18:24:35.495906  123843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/multinode-976328/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 18:24:35.518833  123843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/multinode-976328/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 18:24:35.541346  123843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282.pem --> /usr/share/ca-certificates/95282.pem (1338 bytes)
	I0729 18:24:35.576993  123843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem --> /usr/share/ca-certificates/952822.pem (1708 bytes)
	I0729 18:24:35.671312  123843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 18:24:35.736086  123843 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 18:24:35.777952  123843 ssh_runner.go:195] Run: openssl version
	I0729 18:24:35.806550  123843 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0729 18:24:35.811701  123843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/952822.pem && ln -fs /usr/share/ca-certificates/952822.pem /etc/ssl/certs/952822.pem"
	I0729 18:24:35.827448  123843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/952822.pem
	I0729 18:24:35.835446  123843 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 29 17:45 /usr/share/ca-certificates/952822.pem
	I0729 18:24:35.835477  123843 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:45 /usr/share/ca-certificates/952822.pem
	I0729 18:24:35.835520  123843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/952822.pem
	I0729 18:24:35.863796  123843 command_runner.go:130] > 3ec20f2e
	I0729 18:24:35.863905  123843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/952822.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 18:24:35.879826  123843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 18:24:35.896285  123843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:24:35.902246  123843 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 29 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:24:35.904643  123843 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:24:35.904703  123843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:24:35.921269  123843 command_runner.go:130] > b5213941
	I0729 18:24:35.921379  123843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 18:24:35.937651  123843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/95282.pem && ln -fs /usr/share/ca-certificates/95282.pem /etc/ssl/certs/95282.pem"
	I0729 18:24:35.954984  123843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/95282.pem
	I0729 18:24:35.964220  123843 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 29 17:45 /usr/share/ca-certificates/95282.pem
	I0729 18:24:35.964706  123843 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:45 /usr/share/ca-certificates/95282.pem
	I0729 18:24:35.964761  123843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/95282.pem
	I0729 18:24:35.977113  123843 command_runner.go:130] > 51391683
	I0729 18:24:35.977212  123843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/95282.pem /etc/ssl/certs/51391683.0"
	I0729 18:24:36.003133  123843 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 18:24:36.024906  123843 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 18:24:36.024936  123843 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0729 18:24:36.024944  123843 command_runner.go:130] > Device: 253,1	Inode: 6292011     Links: 1
	I0729 18:24:36.024952  123843 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0729 18:24:36.024961  123843 command_runner.go:130] > Access: 2024-07-29 18:17:46.106282779 +0000
	I0729 18:24:36.024968  123843 command_runner.go:130] > Modify: 2024-07-29 18:17:46.106282779 +0000
	I0729 18:24:36.024975  123843 command_runner.go:130] > Change: 2024-07-29 18:17:46.106282779 +0000
	I0729 18:24:36.024982  123843 command_runner.go:130] >  Birth: 2024-07-29 18:17:46.106282779 +0000
	I0729 18:24:36.025044  123843 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 18:24:36.034421  123843 command_runner.go:130] > Certificate will not expire
	I0729 18:24:36.034610  123843 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 18:24:36.046110  123843 command_runner.go:130] > Certificate will not expire
	I0729 18:24:36.046331  123843 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 18:24:36.061864  123843 command_runner.go:130] > Certificate will not expire
	I0729 18:24:36.062069  123843 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 18:24:36.078654  123843 command_runner.go:130] > Certificate will not expire
	I0729 18:24:36.078926  123843 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 18:24:36.084908  123843 command_runner.go:130] > Certificate will not expire
	I0729 18:24:36.085129  123843 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 18:24:36.090925  123843 command_runner.go:130] > Certificate will not expire
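	The five `openssl x509 -noout -checkend 86400` probes above ask whether each control-plane certificate expires within the next 24 hours. A minimal Go sketch of the same check using only the standard library (the certificate path is just the first one probed above, used here as an example):

	// Sketch: report whether a PEM certificate expires within the next 24 hours.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// Same semantics as `openssl x509 -checkend 86400`.
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("Certificate will expire")
		} else {
			fmt.Println("Certificate will not expire")
		}
	}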
	I0729 18:24:36.091183  123843 kubeadm.go:392] StartCluster: {Name:multinode-976328 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:multinode-976328 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.157 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.144 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:24:36.091337  123843 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 18:24:36.091398  123843 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:24:36.155901  123843 command_runner.go:130] > 71846f8a18b8218eaba74710cd9a2b74113bf1ddc0b85da0142bd2ff10d376e9
	I0729 18:24:36.155933  123843 command_runner.go:130] > 380c57a942e9b8e8da4aeec3c5e4acfb75abd6545a05b4844bcd31342dcd2d56
	I0729 18:24:36.155943  123843 command_runner.go:130] > 8b1718df722bf8960f9a0f853064dc20e706ade11255ddaffd2d6787c83ca9ed
	I0729 18:24:36.155955  123843 command_runner.go:130] > ede7653ba82d8013dc82cc34456497641549733349058713838d911322fe2052
	I0729 18:24:36.155964  123843 command_runner.go:130] > fc72e5cd2f6959f4a5c3767fd52eb35adddd720c79581453e188841b8961736d
	I0729 18:24:36.155971  123843 command_runner.go:130] > 1b584ffa95698b9cb0ec2f252ccbedd6d8c6c50da3e7cbf4707b2f007d97dc7f
	I0729 18:24:36.155980  123843 command_runner.go:130] > fd327222d7f72a72b500b2390cae8ebae3740bf6da2f97a1da3042c6b15897ac
	I0729 18:24:36.155998  123843 command_runner.go:130] > 220c67ac7bb003b3f5eb10ef9500671e3f6242855a58efc5750688b8faa63850
	I0729 18:24:36.156011  123843 command_runner.go:130] > 551d37c89df791c8d7c7ced8d5c57332a6b4a2783a737d5dbdd75763e5784414
	I0729 18:24:36.156019  123843 command_runner.go:130] > 3b8f2b9512e354691ad883e7a1e671367650c4594f37b48f7919a80e82fe46c9
	I0729 18:24:36.156028  123843 command_runner.go:130] > 2927818faccc0686b610f0146bcd8c41985710fdcaa02ee5353cc058348cdf6a
	I0729 18:24:36.156061  123843 cri.go:89] found id: "71846f8a18b8218eaba74710cd9a2b74113bf1ddc0b85da0142bd2ff10d376e9"
	I0729 18:24:36.156074  123843 cri.go:89] found id: "380c57a942e9b8e8da4aeec3c5e4acfb75abd6545a05b4844bcd31342dcd2d56"
	I0729 18:24:36.156079  123843 cri.go:89] found id: "8b1718df722bf8960f9a0f853064dc20e706ade11255ddaffd2d6787c83ca9ed"
	I0729 18:24:36.156084  123843 cri.go:89] found id: "ede7653ba82d8013dc82cc34456497641549733349058713838d911322fe2052"
	I0729 18:24:36.156093  123843 cri.go:89] found id: "fc72e5cd2f6959f4a5c3767fd52eb35adddd720c79581453e188841b8961736d"
	I0729 18:24:36.156097  123843 cri.go:89] found id: "1b584ffa95698b9cb0ec2f252ccbedd6d8c6c50da3e7cbf4707b2f007d97dc7f"
	I0729 18:24:36.156102  123843 cri.go:89] found id: "fd327222d7f72a72b500b2390cae8ebae3740bf6da2f97a1da3042c6b15897ac"
	I0729 18:24:36.156109  123843 cri.go:89] found id: "220c67ac7bb003b3f5eb10ef9500671e3f6242855a58efc5750688b8faa63850"
	I0729 18:24:36.156114  123843 cri.go:89] found id: "551d37c89df791c8d7c7ced8d5c57332a6b4a2783a737d5dbdd75763e5784414"
	I0729 18:24:36.156126  123843 cri.go:89] found id: "3b8f2b9512e354691ad883e7a1e671367650c4594f37b48f7919a80e82fe46c9"
	I0729 18:24:36.156134  123843 cri.go:89] found id: "2927818faccc0686b610f0146bcd8c41985710fdcaa02ee5353cc058348cdf6a"
	I0729 18:24:36.156139  123843 cri.go:89] found id: ""
	I0729 18:24:36.156201  123843 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jul 29 18:26:29 multinode-976328 crio[2915]: time="2024-07-29 18:26:29.400074470Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:b9873fe03dfd66dac4612fc75aad173260796756b67a32a4e0c38b2ca71fba9c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1722277489866456377,StartedAt:1722277489939506215,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-apiserver:v1.30.3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-976328,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e79d457a0c1c2c2d64935c1d26063957,},Annotations:map[string]string{io.kubernetes.container.hash: 691ce7cf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/terminatio
n-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/e79d457a0c1c2c2d64935c1d26063957/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/e79d457a0c1c2c2d64935c1d26063957/containers/kube-apiserver/21ebbc71,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{Containe
rPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-apiserver-multinode-976328_e79d457a0c1c2c2d64935c1d26063957/kube-apiserver/2.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:256,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=5299c387-2835-4e45-a459-b71adc7bf664 name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 29 18:26:29 multinode-976328 crio[2915]: time="2024-07-29 18:26:29.400658681Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:99890209de3349ae1acf10d288f7150011e6631a63614be3a0c65a1939bd9b6e,Verbose:false,}" file="otel-collector/interceptors.go:62" id=65d8357e-10e7-438f-a7a7-6b976d341b2c name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 29 18:26:29 multinode-976328 crio[2915]: time="2024-07-29 18:26:29.400786527Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:99890209de3349ae1acf10d288f7150011e6631a63614be3a0c65a1939bd9b6e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1722277489812908661,StartedAt:1722277489888265405,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/etcd:3.5.12-0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-976328,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4ee0dfa83d8a84f968bb69f76db985b,},Annotations:map[string]string{io.kubernetes.container.hash: a588f316,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminati
onMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/a4ee0dfa83d8a84f968bb69f76db985b/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/a4ee0dfa83d8a84f968bb69f76db985b/containers/etcd/3bdde813,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/etcd,HostPath:/var/lib/minikube/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs/etcd,HostPath:/var/lib/minikube/certs/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_etcd-m
ultinode-976328_a4ee0dfa83d8a84f968bb69f76db985b/etcd/2.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=65d8357e-10e7-438f-a7a7-6b976d341b2c name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 29 18:26:29 multinode-976328 crio[2915]: time="2024-07-29 18:26:29.401327047Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:2157c2885301b6ac6a8e148e42e6f0b9ef4d92b772885763166f26ffd267cf4c,Verbose:false,}" file="otel-collector/interceptors.go:62" id=1b35eb3a-3ab8-4f47-bed9-03ff21bad7d8 name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 29 18:26:29 multinode-976328 crio[2915]: time="2024-07-29 18:26:29.401471017Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:2157c2885301b6ac6a8e148e42e6f0b9ef4d92b772885763166f26ffd267cf4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1722277486499961709,StartedAt:1722277486537880931,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/coredns/coredns:v1.11.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sls9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c72421fc-93fc-42d7-8a68-93fe1f74686f,},Annotations:map[string]string{io.kubernetes.container.hash: 65eba02e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"co
ntainerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/coredns,HostPath:/var/lib/kubelet/pods/c72421fc-93fc-42d7-8a68-93fe1f74686f/volumes/kubernetes.io~configmap/config-volume,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/c72421fc-93fc-42d7-8a68-93fe1f74686f/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/c72421fc-93fc-42d7-8a68-93fe1f74686f/containers/coredns/9bd4953c,Readonly:false,SelinuxRelabel:false,Propagation
:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/c72421fc-93fc-42d7-8a68-93fe1f74686f/volumes/kubernetes.io~projected/kube-api-access-w5d62,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_coredns-7db6d8ff4d-sls9j_c72421fc-93fc-42d7-8a68-93fe1f74686f/coredns/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:178257920,OomScoreAdj:967,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:178257920,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=1b35eb3a-3ab8-4f47-bed9-03ff21bad7d8 name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 29 18:26:29 multinode-976328 crio[2915]: time="2024-07-29 18:26:29.402199128Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:9604b38a357c90bcf05ce45f13f6a4fb7d4dc430b9394b7eb8154f86776bd074,Verbose:false,}" file="otel-collector/interceptors.go:62" id=733b7009-4b32-4a0c-b453-23118421ffcb name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 29 18:26:29 multinode-976328 crio[2915]: time="2024-07-29 18:26:29.402318239Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:9604b38a357c90bcf05ce45f13f6a4fb7d4dc430b9394b7eb8154f86776bd074,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1722277484740785916,StartedAt:1722277484766259901,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:docker.io/kindest/kindnetd:v20240719-e7903573,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ttmqz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f226ace9-e1df-4171-bd7a-80c663032a34,},Annotations:map[string]string{io.kubernetes.container.hash: 81b952b6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/run/xtables.lock,HostPath:/run/xtables.lock,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/lib/modules,HostPath:/lib/modules,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/f226ace9-e1df-4171-bd7a-80c663032a34/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/f226ace9-e1df-4171-bd7a-80c663032a34/containers/kindnet-cni/bf2554eb,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/cni/net.d,HostPath
:/etc/cni/net.d,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/f226ace9-e1df-4171-bd7a-80c663032a34/volumes/kubernetes.io~projected/kube-api-access-7w9n8,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kindnet-ttmqz_f226ace9-e1df-4171-bd7a-80c663032a34/kindnet-cni/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:10000,CpuShares:102,MemoryLimitInBytes:52428800,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:52428800,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=733b7009-4b32-4a0c-b453-231184
21ffcb name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 29 18:26:29 multinode-976328 crio[2915]: time="2024-07-29 18:26:29.403870798Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:d01c9ad4df1fab0ab613b4207329e5ba2139310e3edf3d01a207fd27a46a48df,Verbose:false,}" file="otel-collector/interceptors.go:62" id=6917b466-8856-42b6-b372-b2999733e59e name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 29 18:26:29 multinode-976328 crio[2915]: time="2024-07-29 18:26:29.403980263Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:d01c9ad4df1fab0ab613b4207329e5ba2139310e3edf3d01a207fd27a46a48df,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1722277483365120640,StartedAt:1722277483392499862,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-proxy:v1.30.3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5hqrk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d116a5b3-2d88-4c19-862a-ce4e6100b5c9,},Annotations:map[string]string{io.kubernetes.container.hash: f1e8a241,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/run/xtables.lock,HostPath:/run/xtables.lock,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/lib/modules,HostPath:/lib/modules,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/d116a5b3-2d88-4c19-862a-ce4e6100b5c9/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/d116a5b3-2d88-4c19-862a-ce4e6100b5c9/containers/kube-proxy/16aaf876,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/kube-proxy,HostPath:/var/
lib/kubelet/pods/d116a5b3-2d88-4c19-862a-ce4e6100b5c9/volumes/kubernetes.io~configmap/kube-proxy,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/d116a5b3-2d88-4c19-862a-ce4e6100b5c9/volumes/kubernetes.io~projected/kube-api-access-5tntc,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-proxy-5hqrk_d116a5b3-2d88-4c19-862a-ce4e6100b5c9/kube-proxy/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-c
ollector/interceptors.go:74" id=6917b466-8856-42b6-b372-b2999733e59e name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 29 18:26:29 multinode-976328 crio[2915]: time="2024-07-29 18:26:29.404508157Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:54847580765e8923b115405b28cd71d12c1fe0b6a89f33710e310d7720dc35bb,Verbose:false,}" file="otel-collector/interceptors.go:62" id=45d5603d-0377-42e5-aa4c-5aa82502d07c name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 29 18:26:29 multinode-976328 crio[2915]: time="2024-07-29 18:26:29.404646795Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:54847580765e8923b115405b28cd71d12c1fe0b6a89f33710e310d7720dc35bb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1722277481390914245,StartedAt:1722277481437556514,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-controller-manager:v1.30.3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-976328,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9505709dfd9b02aeb696ed23f164e402,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/9505709dfd9b02aeb696ed23f164e402/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/9505709dfd9b02aeb696ed23f164e402/containers/kube-controller-manager/34333207,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/controller-manager.conf,HostPath:/etc/kubernetes/controller-manager.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,
UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,HostPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-controller-manager-multinode-976328_9505709dfd9b02aeb696ed23f164e402/kube-controller-manager/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:204,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMem
s:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=45d5603d-0377-42e5-aa4c-5aa82502d07c name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 29 18:26:29 multinode-976328 crio[2915]: time="2024-07-29 18:26:29.419987821Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=48000535-9f01-4f02-9634-a70edcef9a25 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:26:29 multinode-976328 crio[2915]: time="2024-07-29 18:26:29.420064806Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=48000535-9f01-4f02-9634-a70edcef9a25 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:26:29 multinode-976328 crio[2915]: time="2024-07-29 18:26:29.421034954Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f34e61a2-426d-4625-be1c-f09a52472fb7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:26:29 multinode-976328 crio[2915]: time="2024-07-29 18:26:29.421526309Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722277589421507144,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f34e61a2-426d-4625-be1c-f09a52472fb7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:26:29 multinode-976328 crio[2915]: time="2024-07-29 18:26:29.421941224Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dfacbb67-b3e1-4d08-b901-e5d502acaf4e name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:26:29 multinode-976328 crio[2915]: time="2024-07-29 18:26:29.422011685Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dfacbb67-b3e1-4d08-b901-e5d502acaf4e name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:26:29 multinode-976328 crio[2915]: time="2024-07-29 18:26:29.422405015Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bd4bedb03eccdac261a791239bb1da575e1e9ef2a04f1e29ab0d460d98a719a3,PodSandboxId:27edcb9cac743e5e25ce7c44c3a05aab42481e0c908e3be763040d245eefeca5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722277509544894990,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mdnj5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 01e64b3b-f8da-4ecf-8914-f0bcea794606,},Annotations:map[string]string{io.kubernetes.container.hash: 603901c4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:942abb259c7e41ee6bcc94c52829c2230867d3047c11119053032f3fc5a82fbf,PodSandboxId:15d282b4a9515f7d0067c75017757bf5202b1e5ad45c366f20f3c7d426b86913,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722277493577273802,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f0e11b0-fc92-4d04-961e-d0888214b2b6,},Annotations:map[string]string{io.kubernetes.container.hash: 84b891e1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:192bdf369e557b0185eaaf55a2b84baee3b192d72ca04820b103f49a17302a92,PodSandboxId:26bae25767fd90c7e18df6ab9f66cbec92b15a087b59eaedd138f3ca74aa7a25,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722277489766213141,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-976328,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3a0c3fce943009af839816b891ac22d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9873fe03dfd66dac4612fc75aad173260796756b67a32a4e0c38b2ca71fba9c,PodSandboxId:94083e46604e3ff6f03c10a1a5ffe4231e14a65565e55dbd405b0b5925aec1d6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722277489769215189,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-976328,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e79d457a0c1c2c2d64935c1d26063957,},Annotations:map[string]string{io.kubernetes.container.hash: 691ce7cf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes
.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99890209de3349ae1acf10d288f7150011e6631a63614be3a0c65a1939bd9b6e,PodSandboxId:c08bff072868f0ad70742e4721c4de14cfaf52c22b05b9621618678c1e008025,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722277489759399643,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-976328,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4ee0dfa83d8a84f968bb69f76db985b,},Annotations:map[string]string{io.kubernetes.container.hash: a588f316,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io
.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2157c2885301b6ac6a8e148e42e6f0b9ef4d92b772885763166f26ffd267cf4c,PodSandboxId:85b2e4245c414d7945daa446a343cbc420696c62d526d2a94e4ba24f48f6efae,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722277486452720732,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sls9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c72421fc-93fc-42d7-8a68-93fe1f74686f,},Annotations:map[string]string{io.kubernetes.container.hash: 65eba02e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\"
:9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:166885d3e009fa3e0be8b5922d734ccb35202f737b5efbd4ececb4b8ac56acec,PodSandboxId:15d282b4a9515f7d0067c75017757bf5202b1e5ad45c366f20f3c7d426b86913,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722277486396048984,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f0e11b0-fc92-4d04-961e-d0888214b2b6,},Annotations:map[string]string{io.kubernetes.container
.hash: 84b891e1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9604b38a357c90bcf05ce45f13f6a4fb7d4dc430b9394b7eb8154f86776bd074,PodSandboxId:e454bd2f9f0a2501d9c2d45b8e358a024438d4f5d9b9567ddf9e408deeabaaa6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722277484565797033,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ttmqz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f226ace9-e1df-4171-bd7a-80c663032a34,},Annotations:map[string]string{io.kubernetes.container.hash: 81b952b6,io.kubernetes.cont
ainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d01c9ad4df1fab0ab613b4207329e5ba2139310e3edf3d01a207fd27a46a48df,PodSandboxId:fe6973ba344afaa84337f9cbdc74a64a090277c02b5c50ca14ac71a04f91f1a0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722277483332093931,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5hqrk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d116a5b3-2d88-4c19-862a-ce4e6100b5c9,},Annotations:map[string]string{io.kubernetes.container.hash: f1e8a241,io.kubernetes.container.restartCount: 1,io.kubernet
es.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54847580765e8923b115405b28cd71d12c1fe0b6a89f33710e310d7720dc35bb,PodSandboxId:16e2eb1765c611ea056238b46c0f275c8501d96377c8c865393df5743bcbc044,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722277481356730172,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-976328,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9505709dfd9b02aeb696ed23f164e402,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71846f8a18b8218eaba74710cd9a2b74113bf1ddc0b85da0142bd2ff10d376e9,PodSandboxId:c08bff072868f0ad70742e4721c4de14cfaf52c22b05b9621618678c1e008025,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722277475804732209,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-976328,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4ee0dfa83d8a84f968bb69f76db985b,},Annotations:map[string]string{io.kubernetes.container.hash: a588f316,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:380c57a942e9b8e8da4aeec3c5e4acfb75abd6545a05b4844bcd31342dcd2d56,PodSandboxId:94083e46604e3ff6f03c10a1a5ffe4231e14a65565e55dbd405b0b5925aec1d6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722277475761559165,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-976328,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e79d457a0c1c2c2d64935c1d26063957,},Annotations:map[string]string{io.kubernetes.container.hash: 691ce7cf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b1718df722bf8960f9a0f853064dc20e706ade11255ddaffd2d6787c83ca9ed,PodSandboxId:26bae25767fd90c7e18df6ab9f66cbec92b15a087b59eaedd138f3ca74aa7a25,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722277475742515222,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-976328,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3a0c3fce943009af839816b891ac22d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad7ab677d3311a89174206ae528f753ea5439656ab7db7cad86b4685066b7465,PodSandboxId:cc5d72b3c3274f25f18b24ce04d4db8a40467c9b039ad699870a2444b538dce4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722277156836871571,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mdnj5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 01e64b3b-f8da-4ecf-8914-f0bcea794606,},Annotations:map[string]string{io.kubernetes.container.hash: 603901c4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ede7653ba82d8013dc82cc34456497641549733349058713838d911322fe2052,PodSandboxId:e3756b7a777ec337e45a3be46d6644245b5cbdcab43bb99a73fbab59237098f0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722277102936035309,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sls9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c72421fc-93fc-42d7-8a68-93fe1f74686f,},Annotations:map[string]string{io.kubernetes.container.hash: 65eba02e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b584ffa95698b9cb0ec2f252ccbedd6d8c6c50da3e7cbf4707b2f007d97dc7f,PodSandboxId:838c7abd5f0e6ea85cb6374de70e5372923e2e8b7c49a0e36552fed0a5dd68a8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722277090774863426,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ttmqz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f226ace9-e1df-4171-bd7a-80c663
032a34,},Annotations:map[string]string{io.kubernetes.container.hash: 81b952b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd327222d7f72a72b500b2390cae8ebae3740bf6da2f97a1da3042c6b15897ac,PodSandboxId:d1d827567ad4e3c5fa168c044f54ff6a6363a7abfc8dfeecfa4c1f95dcc69fb1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722277088426604466,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5hqrk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d116a5b3-2d88-4c19-862a-ce4e6100b5c9,},Annotations:map[string]st
ring{io.kubernetes.container.hash: f1e8a241,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b8f2b9512e354691ad883e7a1e671367650c4594f37b48f7919a80e82fe46c9,PodSandboxId:196c65306bd33d78cee65d65848a13eca37b837a106f9be20bfeae8170a0b9bc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722277069074575446,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-976328,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9505709dfd9b02aeb696ed23f164e402,},Annotations:m
ap[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dfacbb67-b3e1-4d08-b901-e5d502acaf4e name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:26:29 multinode-976328 crio[2915]: time="2024-07-29 18:26:29.461972691Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0e79dbe4-a16d-411c-9889-c6e45bf091cd name=/runtime.v1.RuntimeService/Version
	Jul 29 18:26:29 multinode-976328 crio[2915]: time="2024-07-29 18:26:29.462063425Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0e79dbe4-a16d-411c-9889-c6e45bf091cd name=/runtime.v1.RuntimeService/Version
	Jul 29 18:26:29 multinode-976328 crio[2915]: time="2024-07-29 18:26:29.463675712Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7201433b-974d-4c43-9bc4-a5eb44e1802d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:26:29 multinode-976328 crio[2915]: time="2024-07-29 18:26:29.464370830Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722277589464347846,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7201433b-974d-4c43-9bc4-a5eb44e1802d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:26:29 multinode-976328 crio[2915]: time="2024-07-29 18:26:29.465054721Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=90629010-bcce-41a5-80d8-0cac7ed3dfc0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:26:29 multinode-976328 crio[2915]: time="2024-07-29 18:26:29.465239670Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=90629010-bcce-41a5-80d8-0cac7ed3dfc0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:26:29 multinode-976328 crio[2915]: time="2024-07-29 18:26:29.465590146Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bd4bedb03eccdac261a791239bb1da575e1e9ef2a04f1e29ab0d460d98a719a3,PodSandboxId:27edcb9cac743e5e25ce7c44c3a05aab42481e0c908e3be763040d245eefeca5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722277509544894990,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mdnj5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 01e64b3b-f8da-4ecf-8914-f0bcea794606,},Annotations:map[string]string{io.kubernetes.container.hash: 603901c4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:942abb259c7e41ee6bcc94c52829c2230867d3047c11119053032f3fc5a82fbf,PodSandboxId:15d282b4a9515f7d0067c75017757bf5202b1e5ad45c366f20f3c7d426b86913,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722277493577273802,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f0e11b0-fc92-4d04-961e-d0888214b2b6,},Annotations:map[string]string{io.kubernetes.container.hash: 84b891e1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:192bdf369e557b0185eaaf55a2b84baee3b192d72ca04820b103f49a17302a92,PodSandboxId:26bae25767fd90c7e18df6ab9f66cbec92b15a087b59eaedd138f3ca74aa7a25,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722277489766213141,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-976328,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3a0c3fce943009af839816b891ac22d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9873fe03dfd66dac4612fc75aad173260796756b67a32a4e0c38b2ca71fba9c,PodSandboxId:94083e46604e3ff6f03c10a1a5ffe4231e14a65565e55dbd405b0b5925aec1d6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722277489769215189,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-976328,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e79d457a0c1c2c2d64935c1d26063957,},Annotations:map[string]string{io.kubernetes.container.hash: 691ce7cf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes
.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99890209de3349ae1acf10d288f7150011e6631a63614be3a0c65a1939bd9b6e,PodSandboxId:c08bff072868f0ad70742e4721c4de14cfaf52c22b05b9621618678c1e008025,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722277489759399643,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-976328,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4ee0dfa83d8a84f968bb69f76db985b,},Annotations:map[string]string{io.kubernetes.container.hash: a588f316,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io
.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2157c2885301b6ac6a8e148e42e6f0b9ef4d92b772885763166f26ffd267cf4c,PodSandboxId:85b2e4245c414d7945daa446a343cbc420696c62d526d2a94e4ba24f48f6efae,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722277486452720732,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sls9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c72421fc-93fc-42d7-8a68-93fe1f74686f,},Annotations:map[string]string{io.kubernetes.container.hash: 65eba02e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\"
:9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:166885d3e009fa3e0be8b5922d734ccb35202f737b5efbd4ececb4b8ac56acec,PodSandboxId:15d282b4a9515f7d0067c75017757bf5202b1e5ad45c366f20f3c7d426b86913,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722277486396048984,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f0e11b0-fc92-4d04-961e-d0888214b2b6,},Annotations:map[string]string{io.kubernetes.container
.hash: 84b891e1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9604b38a357c90bcf05ce45f13f6a4fb7d4dc430b9394b7eb8154f86776bd074,PodSandboxId:e454bd2f9f0a2501d9c2d45b8e358a024438d4f5d9b9567ddf9e408deeabaaa6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722277484565797033,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ttmqz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f226ace9-e1df-4171-bd7a-80c663032a34,},Annotations:map[string]string{io.kubernetes.container.hash: 81b952b6,io.kubernetes.cont
ainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d01c9ad4df1fab0ab613b4207329e5ba2139310e3edf3d01a207fd27a46a48df,PodSandboxId:fe6973ba344afaa84337f9cbdc74a64a090277c02b5c50ca14ac71a04f91f1a0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722277483332093931,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5hqrk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d116a5b3-2d88-4c19-862a-ce4e6100b5c9,},Annotations:map[string]string{io.kubernetes.container.hash: f1e8a241,io.kubernetes.container.restartCount: 1,io.kubernet
es.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54847580765e8923b115405b28cd71d12c1fe0b6a89f33710e310d7720dc35bb,PodSandboxId:16e2eb1765c611ea056238b46c0f275c8501d96377c8c865393df5743bcbc044,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722277481356730172,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-976328,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9505709dfd9b02aeb696ed23f164e402,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71846f8a18b8218eaba74710cd9a2b74113bf1ddc0b85da0142bd2ff10d376e9,PodSandboxId:c08bff072868f0ad70742e4721c4de14cfaf52c22b05b9621618678c1e008025,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722277475804732209,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-976328,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4ee0dfa83d8a84f968bb69f76db985b,},Annotations:map[string]string{io.kubernetes.container.hash: a588f316,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:380c57a942e9b8e8da4aeec3c5e4acfb75abd6545a05b4844bcd31342dcd2d56,PodSandboxId:94083e46604e3ff6f03c10a1a5ffe4231e14a65565e55dbd405b0b5925aec1d6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722277475761559165,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-976328,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e79d457a0c1c2c2d64935c1d26063957,},Annotations:map[string]string{io.kubernetes.container.hash: 691ce7cf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b1718df722bf8960f9a0f853064dc20e706ade11255ddaffd2d6787c83ca9ed,PodSandboxId:26bae25767fd90c7e18df6ab9f66cbec92b15a087b59eaedd138f3ca74aa7a25,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722277475742515222,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-976328,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3a0c3fce943009af839816b891ac22d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad7ab677d3311a89174206ae528f753ea5439656ab7db7cad86b4685066b7465,PodSandboxId:cc5d72b3c3274f25f18b24ce04d4db8a40467c9b039ad699870a2444b538dce4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722277156836871571,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mdnj5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 01e64b3b-f8da-4ecf-8914-f0bcea794606,},Annotations:map[string]string{io.kubernetes.container.hash: 603901c4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ede7653ba82d8013dc82cc34456497641549733349058713838d911322fe2052,PodSandboxId:e3756b7a777ec337e45a3be46d6644245b5cbdcab43bb99a73fbab59237098f0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722277102936035309,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sls9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c72421fc-93fc-42d7-8a68-93fe1f74686f,},Annotations:map[string]string{io.kubernetes.container.hash: 65eba02e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b584ffa95698b9cb0ec2f252ccbedd6d8c6c50da3e7cbf4707b2f007d97dc7f,PodSandboxId:838c7abd5f0e6ea85cb6374de70e5372923e2e8b7c49a0e36552fed0a5dd68a8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722277090774863426,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ttmqz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f226ace9-e1df-4171-bd7a-80c663
032a34,},Annotations:map[string]string{io.kubernetes.container.hash: 81b952b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd327222d7f72a72b500b2390cae8ebae3740bf6da2f97a1da3042c6b15897ac,PodSandboxId:d1d827567ad4e3c5fa168c044f54ff6a6363a7abfc8dfeecfa4c1f95dcc69fb1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722277088426604466,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5hqrk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d116a5b3-2d88-4c19-862a-ce4e6100b5c9,},Annotations:map[string]st
ring{io.kubernetes.container.hash: f1e8a241,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b8f2b9512e354691ad883e7a1e671367650c4594f37b48f7919a80e82fe46c9,PodSandboxId:196c65306bd33d78cee65d65848a13eca37b837a106f9be20bfeae8170a0b9bc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722277069074575446,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-976328,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9505709dfd9b02aeb696ed23f164e402,},Annotations:m
ap[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=90629010-bcce-41a5-80d8-0cac7ed3dfc0 name=/runtime.v1.RuntimeService/ListContainers
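	
	The journald entries above are CRI-O debug logs for the CRI runtime-service RPCs (Version, ImageFsInfo, ListContainers) captured on the node at 18:26:29. Assuming the multinode-976328 profile is still running, a similar stream can typically be pulled over SSH, for example:
	
	    out/minikube-linux-amd64 -p multinode-976328 ssh -- sudo journalctl -u crio --no-pager -n 50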
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	bd4bedb03eccd       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   27edcb9cac743       busybox-fc5497c4f-mdnj5
	942abb259c7e4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       2                   15d282b4a9515       storage-provisioner
	b9873fe03dfd6       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      About a minute ago   Running             kube-apiserver            2                   94083e46604e3       kube-apiserver-multinode-976328
	192bdf369e557       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      About a minute ago   Running             kube-scheduler            2                   26bae25767fd9       kube-scheduler-multinode-976328
	99890209de334       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      2                   c08bff072868f       etcd-multinode-976328
	2157c2885301b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   85b2e4245c414       coredns-7db6d8ff4d-sls9j
	166885d3e009f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Exited              storage-provisioner       1                   15d282b4a9515       storage-provisioner
	9604b38a357c9       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      About a minute ago   Running             kindnet-cni               1                   e454bd2f9f0a2       kindnet-ttmqz
	d01c9ad4df1fa       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      About a minute ago   Running             kube-proxy                1                   fe6973ba344af       kube-proxy-5hqrk
	54847580765e8       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      About a minute ago   Running             kube-controller-manager   1                   16e2eb1765c61       kube-controller-manager-multinode-976328
	71846f8a18b82       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Exited              etcd                      1                   c08bff072868f       etcd-multinode-976328
	380c57a942e9b       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      About a minute ago   Exited              kube-apiserver            1                   94083e46604e3       kube-apiserver-multinode-976328
	8b1718df722bf       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      About a minute ago   Exited              kube-scheduler            1                   26bae25767fd9       kube-scheduler-multinode-976328
	ad7ab677d3311       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   cc5d72b3c3274       busybox-fc5497c4f-mdnj5
	ede7653ba82d8       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      8 minutes ago        Exited              coredns                   0                   e3756b7a777ec       coredns-7db6d8ff4d-sls9j
	1b584ffa95698       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    8 minutes ago        Exited              kindnet-cni               0                   838c7abd5f0e6       kindnet-ttmqz
	fd327222d7f72       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      8 minutes ago        Exited              kube-proxy                0                   d1d827567ad4e       kube-proxy-5hqrk
	3b8f2b9512e35       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      8 minutes ago        Exited              kube-controller-manager   0                   196c65306bd33       kube-controller-manager-multinode-976328
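	
	The container listing above is the node's CRI-level view: the Exited rows are the attempt-0/attempt-1 containers replaced during the in-test restart, alongside their Running successors. Assuming the multinode-976328 profile is still up, a comparable listing can usually be reproduced over SSH with crictl, for example:
	
	    out/minikube-linux-amd64 -p multinode-976328 ssh -- sudo crictl ps -a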
	
	
	==> coredns [2157c2885301b6ac6a8e148e42e6f0b9ef4d92b772885763166f26ffd267cf4c] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:41197 - 3790 "HINFO IN 1693272628972894029.8360814626276234203. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008625675s
	
	
	==> coredns [ede7653ba82d8013dc82cc34456497641549733349058713838d911322fe2052] <==
	[INFO] 10.244.1.2:33062 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004045534s
	[INFO] 10.244.1.2:56411 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000098408s
	[INFO] 10.244.1.2:40888 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000108015s
	[INFO] 10.244.1.2:60897 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001620298s
	[INFO] 10.244.1.2:37011 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000063055s
	[INFO] 10.244.1.2:41176 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000069437s
	[INFO] 10.244.1.2:34052 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000087464s
	[INFO] 10.244.0.3:54166 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132608s
	[INFO] 10.244.0.3:46094 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000043269s
	[INFO] 10.244.0.3:40883 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000036608s
	[INFO] 10.244.0.3:45269 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000034177s
	[INFO] 10.244.1.2:57880 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000103022s
	[INFO] 10.244.1.2:58599 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000075324s
	[INFO] 10.244.1.2:33226 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000063607s
	[INFO] 10.244.1.2:36852 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000059955s
	[INFO] 10.244.0.3:42550 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000214672s
	[INFO] 10.244.0.3:42550 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00014319s
	[INFO] 10.244.0.3:33082 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000099951s
	[INFO] 10.244.0.3:37802 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000092353s
	[INFO] 10.244.1.2:48413 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000219781s
	[INFO] 10.244.1.2:54768 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000105088s
	[INFO] 10.244.1.2:34397 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00019185s
	[INFO] 10.244.1.2:48793 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000089391s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
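	
	Of the two CoreDNS blocks above, the first comes from the Running attempt-1 container (2157c288...), whose connection-refused errors most likely cover the window before the restarted kube-apiserver came back, and the second from the Exited attempt-0 container (ede7653b...), which shut down on SIGTERM. Assuming the multinode-976328 context is still reachable, the same logs can typically be re-fetched with kubectl, for example:
	
	    kubectl --context multinode-976328 -n kube-system logs coredns-7db6d8ff4d-sls9j
	    kubectl --context multinode-976328 -n kube-system logs coredns-7db6d8ff4d-sls9j --previous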
	
	
	==> describe nodes <==
	Name:               multinode-976328
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-976328
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=de53afae5e8f4269438099f4dad14d93a8a17e35
	                    minikube.k8s.io/name=multinode-976328
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T18_17_55_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 18:17:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-976328
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 18:26:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 18:24:52 +0000   Mon, 29 Jul 2024 18:17:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 18:24:52 +0000   Mon, 29 Jul 2024 18:17:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 18:24:52 +0000   Mon, 29 Jul 2024 18:17:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 18:24:52 +0000   Mon, 29 Jul 2024 18:18:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.211
	  Hostname:    multinode-976328
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e41ff0df74d7477398733fc105040655
	  System UUID:                e41ff0df-74d7-4773-9873-3fc105040655
	  Boot ID:                    79341e3d-5dfe-46e4-808a-ad4755aae2e8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-mdnj5                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m14s
	  kube-system                 coredns-7db6d8ff4d-sls9j                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m21s
	  kube-system                 etcd-multinode-976328                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m35s
	  kube-system                 kindnet-ttmqz                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m22s
	  kube-system                 kube-apiserver-multinode-976328             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m35s
	  kube-system                 kube-controller-manager-multinode-976328    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m35s
	  kube-system                 kube-proxy-5hqrk                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m22s
	  kube-system                 kube-scheduler-multinode-976328             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m35s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m20s                  kube-proxy       
	  Normal  Starting                 97s                    kube-proxy       
	  Normal  Starting                 8m41s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m41s (x8 over 8m41s)  kubelet          Node multinode-976328 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m41s (x8 over 8m41s)  kubelet          Node multinode-976328 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m41s (x7 over 8m41s)  kubelet          Node multinode-976328 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m41s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    8m35s                  kubelet          Node multinode-976328 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  8m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m35s                  kubelet          Node multinode-976328 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     8m35s                  kubelet          Node multinode-976328 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m35s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m22s                  node-controller  Node multinode-976328 event: Registered Node multinode-976328 in Controller
	  Normal  NodeReady                8m7s                   kubelet          Node multinode-976328 status is now: NodeReady
	  Normal  Starting                 100s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  100s (x8 over 100s)    kubelet          Node multinode-976328 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    100s (x8 over 100s)    kubelet          Node multinode-976328 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     100s (x7 over 100s)    kubelet          Node multinode-976328 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  100s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           84s                    node-controller  Node multinode-976328 event: Registered Node multinode-976328 in Controller
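	
	The node detail above, and the matching sections that follow for multinode-976328-m02 and multinode-976328-m03, is standard kubectl describe output; if the cluster is still up it can be regenerated with, for example:
	
	    kubectl --context multinode-976328 describe nodes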
	
	
	Name:               multinode-976328-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-976328-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=de53afae5e8f4269438099f4dad14d93a8a17e35
	                    minikube.k8s.io/name=multinode-976328
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T18_25_33_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 18:25:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-976328-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 18:26:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 18:26:03 +0000   Mon, 29 Jul 2024 18:25:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 18:26:03 +0000   Mon, 29 Jul 2024 18:25:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 18:26:03 +0000   Mon, 29 Jul 2024 18:25:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 18:26:03 +0000   Mon, 29 Jul 2024 18:25:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.157
	  Hostname:    multinode-976328-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd9ef8f6b46f4ea5ac84757271819bbd
	  System UUID:                cd9ef8f6-b46f-4ea5-ac84-757271819bbd
	  Boot ID:                    02c53be0-542f-46d0-89f9-0a1a0168f13a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-cvmvd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kindnet-bgn52              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m35s
	  kube-system                 kube-proxy-kj7zh           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m31s                  kube-proxy  
	  Normal  Starting                 52s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  7m36s (x2 over 7m36s)  kubelet     Node multinode-976328-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m36s (x2 over 7m36s)  kubelet     Node multinode-976328-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m36s (x2 over 7m36s)  kubelet     Node multinode-976328-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m36s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m16s                  kubelet     Node multinode-976328-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  57s (x2 over 57s)      kubelet     Node multinode-976328-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x2 over 57s)      kubelet     Node multinode-976328-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x2 over 57s)      kubelet     Node multinode-976328-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  57s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                39s                    kubelet     Node multinode-976328-m02 status is now: NodeReady
	
	
	Name:               multinode-976328-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-976328-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=de53afae5e8f4269438099f4dad14d93a8a17e35
	                    minikube.k8s.io/name=multinode-976328
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T18_26_09_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 18:26:08 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-976328-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 18:26:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 18:26:26 +0000   Mon, 29 Jul 2024 18:26:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 18:26:26 +0000   Mon, 29 Jul 2024 18:26:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 18:26:26 +0000   Mon, 29 Jul 2024 18:26:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 18:26:26 +0000   Mon, 29 Jul 2024 18:26:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.144
	  Hostname:    multinode-976328-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 bcc82f8e0eec41ce808c06f7bd64b7ca
	  System UUID:                bcc82f8e-0eec-41ce-808c-06f7bd64b7ca
	  Boot ID:                    b047d2c9-a46e-40ad-b7fe-74699ebf02b0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-jj2s8       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m41s
	  kube-system                 kube-proxy-nwpsp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m36s                  kube-proxy  
	  Normal  Starting                 16s                    kube-proxy  
	  Normal  Starting                 5m49s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  6m42s (x2 over 6m42s)  kubelet     Node multinode-976328-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m42s (x2 over 6m42s)  kubelet     Node multinode-976328-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m42s (x2 over 6m42s)  kubelet     Node multinode-976328-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m41s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m23s                  kubelet     Node multinode-976328-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m54s (x2 over 5m54s)  kubelet     Node multinode-976328-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m54s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    5m54s (x2 over 5m54s)  kubelet     Node multinode-976328-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m54s (x2 over 5m54s)  kubelet     Node multinode-976328-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m36s                  kubelet     Node multinode-976328-m03 status is now: NodeReady
	  Normal  Starting                 21s                    kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  21s (x2 over 21s)      kubelet     Node multinode-976328-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x2 over 21s)      kubelet     Node multinode-976328-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x2 over 21s)      kubelet     Node multinode-976328-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3s                     kubelet     Node multinode-976328-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.058313] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.175121] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.142401] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.264451] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +4.061507] systemd-fstab-generator[757]: Ignoring "noauto" option for root device
	[  +3.800230] systemd-fstab-generator[933]: Ignoring "noauto" option for root device
	[  +0.063793] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.989440] systemd-fstab-generator[1278]: Ignoring "noauto" option for root device
	[  +0.077047] kauditd_printk_skb: 69 callbacks suppressed
	[Jul29 18:18] kauditd_printk_skb: 21 callbacks suppressed
	[  +0.103491] systemd-fstab-generator[1509]: Ignoring "noauto" option for root device
	[ +14.437930] kauditd_printk_skb: 60 callbacks suppressed
	[Jul29 18:19] kauditd_printk_skb: 14 callbacks suppressed
	[Jul29 18:24] systemd-fstab-generator[2834]: Ignoring "noauto" option for root device
	[  +0.137021] systemd-fstab-generator[2846]: Ignoring "noauto" option for root device
	[  +0.178863] systemd-fstab-generator[2860]: Ignoring "noauto" option for root device
	[  +0.139088] systemd-fstab-generator[2872]: Ignoring "noauto" option for root device
	[  +0.278881] systemd-fstab-generator[2900]: Ignoring "noauto" option for root device
	[  +4.814543] systemd-fstab-generator[3009]: Ignoring "noauto" option for root device
	[  +0.082402] kauditd_printk_skb: 105 callbacks suppressed
	[  +6.004651] kauditd_printk_skb: 43 callbacks suppressed
	[  +5.019245] kauditd_printk_skb: 20 callbacks suppressed
	[  +2.696005] systemd-fstab-generator[3864]: Ignoring "noauto" option for root device
	[  +3.760683] kauditd_printk_skb: 55 callbacks suppressed
	[Jul29 18:25] systemd-fstab-generator[4272]: Ignoring "noauto" option for root device
	
	
	==> etcd [71846f8a18b8218eaba74710cd9a2b74113bf1ddc0b85da0142bd2ff10d376e9] <==
	{"level":"info","ts":"2024-07-29T18:24:36.123307Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"19.06495ms"}
	{"level":"info","ts":"2024-07-29T18:24:36.15922Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-07-29T18:24:36.177027Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"a3f4522b5c780b58","local-member-id":"d3f1da2044f49cdd","commit-index":982}
	{"level":"info","ts":"2024-07-29T18:24:36.177205Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3f1da2044f49cdd switched to configuration voters=()"}
	{"level":"info","ts":"2024-07-29T18:24:36.177262Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3f1da2044f49cdd became follower at term 2"}
	{"level":"info","ts":"2024-07-29T18:24:36.177274Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft d3f1da2044f49cdd [peers: [], term: 2, commit: 982, applied: 0, lastindex: 982, lastterm: 2]"}
	{"level":"warn","ts":"2024-07-29T18:24:36.181592Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-07-29T18:24:36.207434Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":897}
	{"level":"info","ts":"2024-07-29T18:24:36.216949Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-07-29T18:24:36.222047Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"d3f1da2044f49cdd","timeout":"7s"}
	{"level":"info","ts":"2024-07-29T18:24:36.223119Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"d3f1da2044f49cdd"}
	{"level":"info","ts":"2024-07-29T18:24:36.223264Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"d3f1da2044f49cdd","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-07-29T18:24:36.223481Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T18:24:36.223721Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T18:24:36.2238Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T18:24:36.224101Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-07-29T18:24:36.225101Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3f1da2044f49cdd switched to configuration voters=(15272227643520752861)"}
	{"level":"info","ts":"2024-07-29T18:24:36.225361Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"a3f4522b5c780b58","local-member-id":"d3f1da2044f49cdd","added-peer-id":"d3f1da2044f49cdd","added-peer-peer-urls":["https://192.168.39.211:2380"]}
	{"level":"info","ts":"2024-07-29T18:24:36.225606Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a3f4522b5c780b58","local-member-id":"d3f1da2044f49cdd","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T18:24:36.225717Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T18:24:36.240678Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T18:24:36.240973Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"d3f1da2044f49cdd","initial-advertise-peer-urls":["https://192.168.39.211:2380"],"listen-peer-urls":["https://192.168.39.211:2380"],"advertise-client-urls":["https://192.168.39.211:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.211:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T18:24:36.241032Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T18:24:36.241964Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.211:2380"}
	{"level":"info","ts":"2024-07-29T18:24:36.242014Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.211:2380"}
	
	
	==> etcd [99890209de3349ae1acf10d288f7150011e6631a63614be3a0c65a1939bd9b6e] <==
	{"level":"info","ts":"2024-07-29T18:24:50.03168Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T18:24:50.028088Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-07-29T18:24:50.032002Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3f1da2044f49cdd switched to configuration voters=(15272227643520752861)"}
	{"level":"info","ts":"2024-07-29T18:24:50.032266Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.211:2380"}
	{"level":"info","ts":"2024-07-29T18:24:50.03336Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T18:24:50.035246Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T18:24:50.035491Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T18:24:50.035397Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"a3f4522b5c780b58","local-member-id":"d3f1da2044f49cdd","added-peer-id":"d3f1da2044f49cdd","added-peer-peer-urls":["https://192.168.39.211:2380"]}
	{"level":"info","ts":"2024-07-29T18:24:50.035658Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a3f4522b5c780b58","local-member-id":"d3f1da2044f49cdd","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T18:24:50.035713Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T18:24:50.035434Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.211:2380"}
	{"level":"info","ts":"2024-07-29T18:24:51.095331Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3f1da2044f49cdd is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-29T18:24:51.095397Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3f1da2044f49cdd became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-29T18:24:51.095428Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3f1da2044f49cdd received MsgPreVoteResp from d3f1da2044f49cdd at term 2"}
	{"level":"info","ts":"2024-07-29T18:24:51.09544Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3f1da2044f49cdd became candidate at term 3"}
	{"level":"info","ts":"2024-07-29T18:24:51.095474Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3f1da2044f49cdd received MsgVoteResp from d3f1da2044f49cdd at term 3"}
	{"level":"info","ts":"2024-07-29T18:24:51.095494Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3f1da2044f49cdd became leader at term 3"}
	{"level":"info","ts":"2024-07-29T18:24:51.095508Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d3f1da2044f49cdd elected leader d3f1da2044f49cdd at term 3"}
	{"level":"info","ts":"2024-07-29T18:24:51.100312Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"d3f1da2044f49cdd","local-member-attributes":"{Name:multinode-976328 ClientURLs:[https://192.168.39.211:2379]}","request-path":"/0/members/d3f1da2044f49cdd/attributes","cluster-id":"a3f4522b5c780b58","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T18:24:51.10036Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T18:24:51.10095Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T18:24:51.102617Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.211:2379"}
	{"level":"info","ts":"2024-07-29T18:24:51.102691Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T18:24:51.103364Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T18:24:51.108332Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 18:26:29 up 9 min,  0 users,  load average: 0.19, 0.21, 0.11
	Linux multinode-976328 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [1b584ffa95698b9cb0ec2f252ccbedd6d8c6c50da3e7cbf4707b2f007d97dc7f] <==
	I0729 18:22:11.746040       1 main.go:322] Node multinode-976328-m03 has CIDR [10.244.3.0/24] 
	I0729 18:22:21.747611       1 main.go:295] Handling node with IPs: map[192.168.39.211:{}]
	I0729 18:22:21.747669       1 main.go:299] handling current node
	I0729 18:22:21.747685       1 main.go:295] Handling node with IPs: map[192.168.39.157:{}]
	I0729 18:22:21.747690       1 main.go:322] Node multinode-976328-m02 has CIDR [10.244.1.0/24] 
	I0729 18:22:21.747859       1 main.go:295] Handling node with IPs: map[192.168.39.144:{}]
	I0729 18:22:21.747885       1 main.go:322] Node multinode-976328-m03 has CIDR [10.244.3.0/24] 
	I0729 18:22:31.754126       1 main.go:295] Handling node with IPs: map[192.168.39.211:{}]
	I0729 18:22:31.754346       1 main.go:299] handling current node
	I0729 18:22:31.754382       1 main.go:295] Handling node with IPs: map[192.168.39.157:{}]
	I0729 18:22:31.754402       1 main.go:322] Node multinode-976328-m02 has CIDR [10.244.1.0/24] 
	I0729 18:22:31.754552       1 main.go:295] Handling node with IPs: map[192.168.39.144:{}]
	I0729 18:22:31.754573       1 main.go:322] Node multinode-976328-m03 has CIDR [10.244.3.0/24] 
	I0729 18:22:41.754826       1 main.go:295] Handling node with IPs: map[192.168.39.211:{}]
	I0729 18:22:41.754863       1 main.go:299] handling current node
	I0729 18:22:41.754901       1 main.go:295] Handling node with IPs: map[192.168.39.157:{}]
	I0729 18:22:41.754908       1 main.go:322] Node multinode-976328-m02 has CIDR [10.244.1.0/24] 
	I0729 18:22:41.755017       1 main.go:295] Handling node with IPs: map[192.168.39.144:{}]
	I0729 18:22:41.755040       1 main.go:322] Node multinode-976328-m03 has CIDR [10.244.3.0/24] 
	I0729 18:22:51.751017       1 main.go:295] Handling node with IPs: map[192.168.39.144:{}]
	I0729 18:22:51.751141       1 main.go:322] Node multinode-976328-m03 has CIDR [10.244.3.0/24] 
	I0729 18:22:51.751366       1 main.go:295] Handling node with IPs: map[192.168.39.211:{}]
	I0729 18:22:51.751411       1 main.go:299] handling current node
	I0729 18:22:51.751441       1 main.go:295] Handling node with IPs: map[192.168.39.157:{}]
	I0729 18:22:51.751458       1 main.go:322] Node multinode-976328-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [9604b38a357c90bcf05ce45f13f6a4fb7d4dc430b9394b7eb8154f86776bd074] <==
	I0729 18:25:45.444938       1 main.go:322] Node multinode-976328-m03 has CIDR [10.244.3.0/24] 
	I0729 18:25:55.446121       1 main.go:295] Handling node with IPs: map[192.168.39.211:{}]
	I0729 18:25:55.446347       1 main.go:299] handling current node
	I0729 18:25:55.446418       1 main.go:295] Handling node with IPs: map[192.168.39.157:{}]
	I0729 18:25:55.446440       1 main.go:322] Node multinode-976328-m02 has CIDR [10.244.1.0/24] 
	I0729 18:25:55.446607       1 main.go:295] Handling node with IPs: map[192.168.39.144:{}]
	I0729 18:25:55.446630       1 main.go:322] Node multinode-976328-m03 has CIDR [10.244.3.0/24] 
	I0729 18:26:05.445641       1 main.go:295] Handling node with IPs: map[192.168.39.211:{}]
	I0729 18:26:05.445732       1 main.go:299] handling current node
	I0729 18:26:05.445753       1 main.go:295] Handling node with IPs: map[192.168.39.157:{}]
	I0729 18:26:05.445759       1 main.go:322] Node multinode-976328-m02 has CIDR [10.244.1.0/24] 
	I0729 18:26:05.445899       1 main.go:295] Handling node with IPs: map[192.168.39.144:{}]
	I0729 18:26:05.445904       1 main.go:322] Node multinode-976328-m03 has CIDR [10.244.3.0/24] 
	I0729 18:26:15.446891       1 main.go:295] Handling node with IPs: map[192.168.39.211:{}]
	I0729 18:26:15.447117       1 main.go:299] handling current node
	I0729 18:26:15.447200       1 main.go:295] Handling node with IPs: map[192.168.39.157:{}]
	I0729 18:26:15.447225       1 main.go:322] Node multinode-976328-m02 has CIDR [10.244.1.0/24] 
	I0729 18:26:15.447363       1 main.go:295] Handling node with IPs: map[192.168.39.144:{}]
	I0729 18:26:15.447387       1 main.go:322] Node multinode-976328-m03 has CIDR [10.244.2.0/24] 
	I0729 18:26:25.445001       1 main.go:295] Handling node with IPs: map[192.168.39.211:{}]
	I0729 18:26:25.445063       1 main.go:299] handling current node
	I0729 18:26:25.445077       1 main.go:295] Handling node with IPs: map[192.168.39.157:{}]
	I0729 18:26:25.445083       1 main.go:322] Node multinode-976328-m02 has CIDR [10.244.1.0/24] 
	I0729 18:26:25.445290       1 main.go:295] Handling node with IPs: map[192.168.39.144:{}]
	I0729 18:26:25.445317       1 main.go:322] Node multinode-976328-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [380c57a942e9b8e8da4aeec3c5e4acfb75abd6545a05b4844bcd31342dcd2d56] <==
	I0729 18:24:36.264701       1 options.go:221] external host was not specified, using 192.168.39.211
	I0729 18:24:36.271517       1 server.go:148] Version: v1.30.3
	I0729 18:24:36.271551       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W0729 18:24:36.836219       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:24:36.836385       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0729 18:24:36.836460       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0729 18:24:36.844519       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 18:24:36.845785       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0729 18:24:36.845843       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0729 18:24:36.846053       1 instance.go:299] Using reconciler: lease
	W0729 18:24:36.852344       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:24:37.836913       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:24:37.837025       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:24:37.855076       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:24:39.271630       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:24:39.329860       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:24:39.346812       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:24:41.662752       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:24:42.035888       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:24:42.354126       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:24:46.008458       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:24:46.410713       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [b9873fe03dfd66dac4612fc75aad173260796756b67a32a4e0c38b2ca71fba9c] <==
	I0729 18:24:52.532482       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0729 18:24:52.532901       1 shared_informer.go:320] Caches are synced for configmaps
	I0729 18:24:52.533134       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 18:24:52.537533       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 18:24:52.548924       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0729 18:24:52.537569       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0729 18:24:52.549606       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0729 18:24:52.549994       1 aggregator.go:165] initial CRD sync complete...
	I0729 18:24:52.550752       1 autoregister_controller.go:141] Starting autoregister controller
	I0729 18:24:52.550843       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0729 18:24:52.550868       1 cache.go:39] Caches are synced for autoregister controller
	I0729 18:24:52.551992       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E0729 18:24:52.574827       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0729 18:24:53.444049       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0729 18:24:54.172647       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 18:24:54.290828       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0729 18:24:54.305892       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0729 18:24:54.369659       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0729 18:24:54.376877       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0729 18:25:05.752802       1 controller.go:615] quota admission added evaluator for: endpoints
	I0729 18:25:05.840046       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0729 18:26:09.649597       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0729 18:26:09.649787       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0729 18:26:09.651011       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0729 18:26:09.651120       1 timeout.go:142] post-timeout activity - time-elapsed: 1.631536ms, GET "/api/v1/services" result: <nil>
	
	
	==> kube-controller-manager [3b8f2b9512e354691ad883e7a1e671367650c4594f37b48f7919a80e82fe46c9] <==
	I0729 18:18:53.993446       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-976328-m02\" does not exist"
	I0729 18:18:54.070668       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-976328-m02" podCIDRs=["10.244.1.0/24"]
	I0729 18:18:57.169451       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-976328-m02"
	I0729 18:19:13.259406       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-976328-m02"
	I0729 18:19:15.563841       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.898995ms"
	I0729 18:19:15.586110       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="22.121072ms"
	I0729 18:19:15.599935       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.550577ms"
	I0729 18:19:15.600051       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="63.795µs"
	I0729 18:19:17.619417       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.481893ms"
	I0729 18:19:17.619608       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="71.664µs"
	I0729 18:19:17.885327       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.434412ms"
	I0729 18:19:17.886361       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.702µs"
	I0729 18:19:48.178917       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-976328-m02"
	I0729 18:19:48.179061       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-976328-m03\" does not exist"
	I0729 18:19:48.223503       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-976328-m03" podCIDRs=["10.244.2.0/24"]
	I0729 18:19:52.190570       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-976328-m03"
	I0729 18:20:06.992053       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-976328-m02"
	I0729 18:20:34.280747       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-976328-m02"
	I0729 18:20:35.378442       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-976328-m02"
	I0729 18:20:35.378538       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-976328-m03\" does not exist"
	I0729 18:20:35.392694       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-976328-m03" podCIDRs=["10.244.3.0/24"]
	I0729 18:20:53.136750       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-976328-m03"
	I0729 18:21:37.245151       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-976328-m02"
	I0729 18:21:37.319893       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.941452ms"
	I0729 18:21:37.321560       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="202.224µs"
	
	
	==> kube-controller-manager [54847580765e8923b115405b28cd71d12c1fe0b6a89f33710e310d7720dc35bb] <==
	I0729 18:25:06.489466       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 18:25:06.489501       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0729 18:25:17.989954       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="111.027µs"
	I0729 18:25:28.459649       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.34822ms"
	I0729 18:25:28.459887       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="110.73µs"
	I0729 18:25:28.470609       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.380271ms"
	I0729 18:25:28.470759       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.484µs"
	I0729 18:25:32.721848       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-976328-m02\" does not exist"
	I0729 18:25:32.742763       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-976328-m02" podCIDRs=["10.244.1.0/24"]
	I0729 18:25:34.636043       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.515µs"
	I0729 18:25:34.651354       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.177µs"
	I0729 18:25:34.659589       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.842µs"
	I0729 18:25:34.675520       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="64.787µs"
	I0729 18:25:34.687345       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.426µs"
	I0729 18:25:34.692658       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.977µs"
	I0729 18:25:50.470040       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-976328-m02"
	I0729 18:25:50.509004       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.409µs"
	I0729 18:25:50.521577       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.874µs"
	I0729 18:25:51.956334       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.555593ms"
	I0729 18:25:51.956566       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.882µs"
	I0729 18:26:07.728294       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-976328-m02"
	I0729 18:26:08.812970       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-976328-m03\" does not exist"
	I0729 18:26:08.813137       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-976328-m02"
	I0729 18:26:08.823757       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-976328-m03" podCIDRs=["10.244.2.0/24"]
	I0729 18:26:26.543547       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-976328-m03"
	
	
	==> kube-proxy [d01c9ad4df1fab0ab613b4207329e5ba2139310e3edf3d01a207fd27a46a48df] <==
	I0729 18:24:43.442347       1 server_linux.go:69] "Using iptables proxy"
	E0729 18:24:47.526647       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/multinode-976328\": dial tcp 192.168.39.211:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.211:46198->192.168.39.211:8443: read: connection reset by peer"
	E0729 18:24:48.707078       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/multinode-976328\": dial tcp 192.168.39.211:8443: connect: connection refused"
	I0729 18:24:52.554978       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.211"]
	I0729 18:24:52.623431       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 18:24:52.623473       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 18:24:52.623490       1 server_linux.go:165] "Using iptables Proxier"
	I0729 18:24:52.625986       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 18:24:52.626296       1 server.go:872] "Version info" version="v1.30.3"
	I0729 18:24:52.626542       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 18:24:52.627933       1 config.go:192] "Starting service config controller"
	I0729 18:24:52.627986       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 18:24:52.628024       1 config.go:101] "Starting endpoint slice config controller"
	I0729 18:24:52.628040       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 18:24:52.628745       1 config.go:319] "Starting node config controller"
	I0729 18:24:52.630237       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 18:24:52.728802       1 shared_informer.go:320] Caches are synced for service config
	I0729 18:24:52.728818       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 18:24:52.730367       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [fd327222d7f72a72b500b2390cae8ebae3740bf6da2f97a1da3042c6b15897ac] <==
	I0729 18:18:08.719117       1 server_linux.go:69] "Using iptables proxy"
	I0729 18:18:08.768098       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.211"]
	I0729 18:18:08.897958       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 18:18:08.898008       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 18:18:08.898026       1 server_linux.go:165] "Using iptables Proxier"
	I0729 18:18:08.905548       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 18:18:08.906950       1 server.go:872] "Version info" version="v1.30.3"
	I0729 18:18:08.907104       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 18:18:08.910558       1 config.go:192] "Starting service config controller"
	I0729 18:18:08.911104       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 18:18:08.911306       1 config.go:101] "Starting endpoint slice config controller"
	I0729 18:18:08.911315       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 18:18:08.913853       1 config.go:319] "Starting node config controller"
	I0729 18:18:08.913860       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 18:18:09.011564       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 18:18:09.011619       1 shared_informer.go:320] Caches are synced for service config
	I0729 18:18:09.014775       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [192bdf369e557b0185eaaf55a2b84baee3b192d72ca04820b103f49a17302a92] <==
	I0729 18:24:50.466788       1 serving.go:380] Generated self-signed cert in-memory
	W0729 18:24:52.487443       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 18:24:52.487541       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 18:24:52.487577       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 18:24:52.487655       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 18:24:52.552582       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0729 18:24:52.552677       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 18:24:52.558969       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0729 18:24:52.577553       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 18:24:52.577643       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 18:24:52.577672       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 18:24:52.678667       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [8b1718df722bf8960f9a0f853064dc20e706ade11255ddaffd2d6787c83ca9ed] <==
	I0729 18:24:36.728896       1 serving.go:380] Generated self-signed cert in-memory
	W0729 18:24:47.524611       1 authentication.go:368] Error looking up in-cluster authentication configuration: Get "https://192.168.39.211:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.39.211:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.211:40772->192.168.39.211:8443: read: connection reset by peer
	W0729 18:24:47.524795       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 18:24:47.524825       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 18:24:47.545086       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0729 18:24:47.545126       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 18:24:47.546578       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 18:24:47.546639       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0729 18:24:47.546674       1 shared_informer.go:316] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 18:24:47.546698       1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 18:24:47.546716       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	E0729 18:24:47.546799       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	E0729 18:24:47.547006       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 29 18:24:49 multinode-976328 kubelet[3871]: E0729 18:24:49.990243    3871 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.211:8443: connect: connection refused" node="multinode-976328"
	Jul 29 18:24:50 multinode-976328 kubelet[3871]: I0729 18:24:50.792153    3871 kubelet_node_status.go:73] "Attempting to register node" node="multinode-976328"
	Jul 29 18:24:52 multinode-976328 kubelet[3871]: I0729 18:24:52.579805    3871 kubelet_node_status.go:112] "Node was previously registered" node="multinode-976328"
	Jul 29 18:24:52 multinode-976328 kubelet[3871]: I0729 18:24:52.580271    3871 kubelet_node_status.go:76] "Successfully registered node" node="multinode-976328"
	Jul 29 18:24:52 multinode-976328 kubelet[3871]: I0729 18:24:52.584810    3871 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 29 18:24:52 multinode-976328 kubelet[3871]: I0729 18:24:52.585859    3871 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 29 18:24:53 multinode-976328 kubelet[3871]: I0729 18:24:53.252126    3871 apiserver.go:52] "Watching apiserver"
	Jul 29 18:24:53 multinode-976328 kubelet[3871]: I0729 18:24:53.255402    3871 topology_manager.go:215] "Topology Admit Handler" podUID="d116a5b3-2d88-4c19-862a-ce4e6100b5c9" podNamespace="kube-system" podName="kube-proxy-5hqrk"
	Jul 29 18:24:53 multinode-976328 kubelet[3871]: I0729 18:24:53.255668    3871 topology_manager.go:215] "Topology Admit Handler" podUID="f226ace9-e1df-4171-bd7a-80c663032a34" podNamespace="kube-system" podName="kindnet-ttmqz"
	Jul 29 18:24:53 multinode-976328 kubelet[3871]: I0729 18:24:53.255765    3871 topology_manager.go:215] "Topology Admit Handler" podUID="c72421fc-93fc-42d7-8a68-93fe1f74686f" podNamespace="kube-system" podName="coredns-7db6d8ff4d-sls9j"
	Jul 29 18:24:53 multinode-976328 kubelet[3871]: I0729 18:24:53.256311    3871 topology_manager.go:215] "Topology Admit Handler" podUID="9f0e11b0-fc92-4d04-961e-d0888214b2b6" podNamespace="kube-system" podName="storage-provisioner"
	Jul 29 18:24:53 multinode-976328 kubelet[3871]: I0729 18:24:53.256486    3871 topology_manager.go:215] "Topology Admit Handler" podUID="01e64b3b-f8da-4ecf-8914-f0bcea794606" podNamespace="default" podName="busybox-fc5497c4f-mdnj5"
	Jul 29 18:24:53 multinode-976328 kubelet[3871]: I0729 18:24:53.295024    3871 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 29 18:24:53 multinode-976328 kubelet[3871]: I0729 18:24:53.392332    3871 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/9f0e11b0-fc92-4d04-961e-d0888214b2b6-tmp\") pod \"storage-provisioner\" (UID: \"9f0e11b0-fc92-4d04-961e-d0888214b2b6\") " pod="kube-system/storage-provisioner"
	Jul 29 18:24:53 multinode-976328 kubelet[3871]: I0729 18:24:53.392397    3871 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f226ace9-e1df-4171-bd7a-80c663032a34-cni-cfg\") pod \"kindnet-ttmqz\" (UID: \"f226ace9-e1df-4171-bd7a-80c663032a34\") " pod="kube-system/kindnet-ttmqz"
	Jul 29 18:24:53 multinode-976328 kubelet[3871]: I0729 18:24:53.392416    3871 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f226ace9-e1df-4171-bd7a-80c663032a34-lib-modules\") pod \"kindnet-ttmqz\" (UID: \"f226ace9-e1df-4171-bd7a-80c663032a34\") " pod="kube-system/kindnet-ttmqz"
	Jul 29 18:24:53 multinode-976328 kubelet[3871]: I0729 18:24:53.392462    3871 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f226ace9-e1df-4171-bd7a-80c663032a34-xtables-lock\") pod \"kindnet-ttmqz\" (UID: \"f226ace9-e1df-4171-bd7a-80c663032a34\") " pod="kube-system/kindnet-ttmqz"
	Jul 29 18:24:53 multinode-976328 kubelet[3871]: I0729 18:24:53.392492    3871 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d116a5b3-2d88-4c19-862a-ce4e6100b5c9-xtables-lock\") pod \"kube-proxy-5hqrk\" (UID: \"d116a5b3-2d88-4c19-862a-ce4e6100b5c9\") " pod="kube-system/kube-proxy-5hqrk"
	Jul 29 18:24:53 multinode-976328 kubelet[3871]: I0729 18:24:53.392514    3871 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d116a5b3-2d88-4c19-862a-ce4e6100b5c9-lib-modules\") pod \"kube-proxy-5hqrk\" (UID: \"d116a5b3-2d88-4c19-862a-ce4e6100b5c9\") " pod="kube-system/kube-proxy-5hqrk"
	Jul 29 18:24:53 multinode-976328 kubelet[3871]: I0729 18:24:53.560063    3871 scope.go:117] "RemoveContainer" containerID="166885d3e009fa3e0be8b5922d734ccb35202f737b5efbd4ececb4b8ac56acec"
	Jul 29 18:25:49 multinode-976328 kubelet[3871]: E0729 18:25:49.350277    3871 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 18:25:49 multinode-976328 kubelet[3871]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 18:25:49 multinode-976328 kubelet[3871]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 18:25:49 multinode-976328 kubelet[3871]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 18:25:49 multinode-976328 kubelet[3871]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 18:26:29.057337  124969 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19339-88081/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
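The "bufio.Scanner: token too long" error in the stderr above is a standard Go failure mode: bufio.Scanner refuses to return a single line longer than its buffer, which defaults to 64 KiB, so a lastStart.txt containing one very long line cannot be read that way. The following Go sketch is illustrative only (it is not minikube's actual logs.go); the file path is a hypothetical stand-in, and it shows how Scanner.Buffer raises the limit:

package main

// Minimal sketch, assuming a hypothetical log file with very long lines.
// bufio.Scanner fails with "token too long" once a line exceeds its buffer
// (64 KiB by default); Scanner.Buffer raises that limit.

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	f, err := os.Open("lastStart.txt") // hypothetical path, for illustration only
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// Start with a 64 KiB buffer but allow tokens up to 10 MiB.
	sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		// Without the Buffer call above, this is where
		// "bufio.Scanner: token too long" would surface.
		fmt.Fprintln(os.Stderr, "scan error:", err)
	}
}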
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-976328 -n multinode-976328
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-976328 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (335.10s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (141.41s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976328 stop
E0729 18:28:18.906988   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/functional-810151/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-976328 stop: exit status 82 (2m0.455065733s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-976328-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-976328 stop": exit status 82
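The GUEST_STOP_TIMEOUT failure above (exit status 82 after roughly two minutes) has the shape of a stop call that polls the VM and gives up while the machine still reports "Running". The Go sketch below is illustrative only and is not minikube's stop code; the vmState helper and the two-minute budget are assumptions made for the example:

package main

import (
	"fmt"
	"time"
)

// vmState stands in for whatever the KVM driver would report; hypothetical.
func vmState() string { return "Running" }

// stopWithTimeout polls until the VM reports "Stopped" or the deadline
// passes, then returns an error naming the state it is stuck in.
func stopWithTimeout(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if vmState() == "Stopped" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("unable to stop vm, current state %q", vmState())
}

func main() {
	if err := stopWithTimeout(2 * time.Minute); err != nil {
		fmt.Println("GUEST_STOP_TIMEOUT:", err)
	}
}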
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976328 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-976328 status: exit status 3 (18.864316621s)

                                                
                                                
-- stdout --
	multinode-976328
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-976328-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 18:28:52.425130  125633 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.157:22: connect: no route to host
	E0729 18:28:52.425187  125633 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.157:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-976328 status" : exit status 3
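
The status errors above trace back to an SSH dial against the m02 VM at 192.168.39.157:22. As an illustrative sketch (not harness code; the address is taken from the stderr above), the same reachability condition can be checked with a direct TCP dial:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Address of multinode-976328-m02 as reported by status.go in the stderr above.
	conn, err := net.DialTimeout("tcp", "192.168.39.157:22", 5*time.Second)
	if err != nil {
		// A failure here mirrors the "no route to host" errors that produced exit status 3.
		fmt.Println("worker SSH port unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("worker SSH port reachable")
}
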
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-976328 -n multinode-976328
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976328 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-976328 logs -n 25: (1.442588181s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-976328 ssh -n                                                                 | multinode-976328 | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC | 29 Jul 24 18:20 UTC |
	|         | multinode-976328-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-976328 cp multinode-976328-m02:/home/docker/cp-test.txt                       | multinode-976328 | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC | 29 Jul 24 18:20 UTC |
	|         | multinode-976328:/home/docker/cp-test_multinode-976328-m02_multinode-976328.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-976328 ssh -n                                                                 | multinode-976328 | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC | 29 Jul 24 18:20 UTC |
	|         | multinode-976328-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-976328 ssh -n multinode-976328 sudo cat                                       | multinode-976328 | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC | 29 Jul 24 18:20 UTC |
	|         | /home/docker/cp-test_multinode-976328-m02_multinode-976328.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-976328 cp multinode-976328-m02:/home/docker/cp-test.txt                       | multinode-976328 | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC | 29 Jul 24 18:20 UTC |
	|         | multinode-976328-m03:/home/docker/cp-test_multinode-976328-m02_multinode-976328-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-976328 ssh -n                                                                 | multinode-976328 | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC | 29 Jul 24 18:20 UTC |
	|         | multinode-976328-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-976328 ssh -n multinode-976328-m03 sudo cat                                   | multinode-976328 | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC | 29 Jul 24 18:20 UTC |
	|         | /home/docker/cp-test_multinode-976328-m02_multinode-976328-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-976328 cp testdata/cp-test.txt                                                | multinode-976328 | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC | 29 Jul 24 18:20 UTC |
	|         | multinode-976328-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-976328 ssh -n                                                                 | multinode-976328 | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC | 29 Jul 24 18:20 UTC |
	|         | multinode-976328-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-976328 cp multinode-976328-m03:/home/docker/cp-test.txt                       | multinode-976328 | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC | 29 Jul 24 18:20 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile737376291/001/cp-test_multinode-976328-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-976328 ssh -n                                                                 | multinode-976328 | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC | 29 Jul 24 18:20 UTC |
	|         | multinode-976328-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-976328 cp multinode-976328-m03:/home/docker/cp-test.txt                       | multinode-976328 | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC | 29 Jul 24 18:20 UTC |
	|         | multinode-976328:/home/docker/cp-test_multinode-976328-m03_multinode-976328.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-976328 ssh -n                                                                 | multinode-976328 | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC | 29 Jul 24 18:20 UTC |
	|         | multinode-976328-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-976328 ssh -n multinode-976328 sudo cat                                       | multinode-976328 | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC | 29 Jul 24 18:20 UTC |
	|         | /home/docker/cp-test_multinode-976328-m03_multinode-976328.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-976328 cp multinode-976328-m03:/home/docker/cp-test.txt                       | multinode-976328 | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC | 29 Jul 24 18:20 UTC |
	|         | multinode-976328-m02:/home/docker/cp-test_multinode-976328-m03_multinode-976328-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-976328 ssh -n                                                                 | multinode-976328 | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC | 29 Jul 24 18:20 UTC |
	|         | multinode-976328-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-976328 ssh -n multinode-976328-m02 sudo cat                                   | multinode-976328 | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC | 29 Jul 24 18:20 UTC |
	|         | /home/docker/cp-test_multinode-976328-m03_multinode-976328-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-976328 node stop m03                                                          | multinode-976328 | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC | 29 Jul 24 18:20 UTC |
	| node    | multinode-976328 node start                                                             | multinode-976328 | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC | 29 Jul 24 18:20 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-976328                                                                | multinode-976328 | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC |                     |
	| stop    | -p multinode-976328                                                                     | multinode-976328 | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC |                     |
	| start   | -p multinode-976328                                                                     | multinode-976328 | jenkins | v1.33.1 | 29 Jul 24 18:22 UTC | 29 Jul 24 18:26 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-976328                                                                | multinode-976328 | jenkins | v1.33.1 | 29 Jul 24 18:26 UTC |                     |
	| node    | multinode-976328 node delete                                                            | multinode-976328 | jenkins | v1.33.1 | 29 Jul 24 18:26 UTC | 29 Jul 24 18:26 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-976328 stop                                                                   | multinode-976328 | jenkins | v1.33.1 | 29 Jul 24 18:26 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 18:22:57
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 18:22:57.481581  123843 out.go:291] Setting OutFile to fd 1 ...
	I0729 18:22:57.481722  123843 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:22:57.481733  123843 out.go:304] Setting ErrFile to fd 2...
	I0729 18:22:57.481739  123843 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:22:57.481912  123843 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19339-88081/.minikube/bin
	I0729 18:22:57.482532  123843 out.go:298] Setting JSON to false
	I0729 18:22:57.483457  123843 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":11097,"bootTime":1722266280,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 18:22:57.483525  123843 start.go:139] virtualization: kvm guest
	I0729 18:22:57.486338  123843 out.go:177] * [multinode-976328] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 18:22:57.487703  123843 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 18:22:57.487712  123843 notify.go:220] Checking for updates...
	I0729 18:22:57.490419  123843 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 18:22:57.491646  123843 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19339-88081/kubeconfig
	I0729 18:22:57.493115  123843 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19339-88081/.minikube
	I0729 18:22:57.494345  123843 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 18:22:57.495595  123843 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 18:22:57.497355  123843 config.go:182] Loaded profile config "multinode-976328": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:22:57.497451  123843 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 18:22:57.497862  123843 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:22:57.497922  123843 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:22:57.512757  123843 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42721
	I0729 18:22:57.513158  123843 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:22:57.513703  123843 main.go:141] libmachine: Using API Version  1
	I0729 18:22:57.513725  123843 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:22:57.514055  123843 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:22:57.514212  123843 main.go:141] libmachine: (multinode-976328) Calling .DriverName
	I0729 18:22:57.547831  123843 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 18:22:57.549092  123843 start.go:297] selected driver: kvm2
	I0729 18:22:57.549116  123843 start.go:901] validating driver "kvm2" against &{Name:multinode-976328 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-976328 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.157 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.144 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:22:57.549228  123843 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 18:22:57.549567  123843 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 18:22:57.549671  123843 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19339-88081/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 18:22:57.563944  123843 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 18:22:57.564639  123843 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 18:22:57.564694  123843 cni.go:84] Creating CNI manager for ""
	I0729 18:22:57.564705  123843 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0729 18:22:57.564762  123843 start.go:340] cluster config:
	{Name:multinode-976328 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-976328 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.157 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.144 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:22:57.564934  123843 iso.go:125] acquiring lock: {Name:mkff602c753bfa3e0d79d57ee8dc490f2e8d0298 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 18:22:57.566539  123843 out.go:177] * Starting "multinode-976328" primary control-plane node in "multinode-976328" cluster
	I0729 18:22:57.567662  123843 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 18:22:57.567713  123843 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 18:22:57.567733  123843 cache.go:56] Caching tarball of preloaded images
	I0729 18:22:57.567841  123843 preload.go:172] Found /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 18:22:57.567853  123843 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 18:22:57.568003  123843 profile.go:143] Saving config to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/multinode-976328/config.json ...
	I0729 18:22:57.568231  123843 start.go:360] acquireMachinesLock for multinode-976328: {Name:mkb4fc91615189a18c2505c715f6575ee0da2912 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 18:22:57.568277  123843 start.go:364] duration metric: took 26.231µs to acquireMachinesLock for "multinode-976328"
	I0729 18:22:57.568298  123843 start.go:96] Skipping create...Using existing machine configuration
	I0729 18:22:57.568308  123843 fix.go:54] fixHost starting: 
	I0729 18:22:57.568615  123843 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:22:57.568652  123843 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:22:57.582692  123843 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37987
	I0729 18:22:57.583075  123843 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:22:57.583527  123843 main.go:141] libmachine: Using API Version  1
	I0729 18:22:57.583551  123843 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:22:57.583891  123843 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:22:57.584088  123843 main.go:141] libmachine: (multinode-976328) Calling .DriverName
	I0729 18:22:57.584283  123843 main.go:141] libmachine: (multinode-976328) Calling .GetState
	I0729 18:22:57.585875  123843 fix.go:112] recreateIfNeeded on multinode-976328: state=Running err=<nil>
	W0729 18:22:57.585891  123843 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 18:22:57.588411  123843 out.go:177] * Updating the running kvm2 "multinode-976328" VM ...
	I0729 18:22:57.589883  123843 machine.go:94] provisionDockerMachine start ...
	I0729 18:22:57.589907  123843 main.go:141] libmachine: (multinode-976328) Calling .DriverName
	I0729 18:22:57.590114  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHHostname
	I0729 18:22:57.592474  123843 main.go:141] libmachine: (multinode-976328) DBG | domain multinode-976328 has defined MAC address 52:54:00:56:e1:42 in network mk-multinode-976328
	I0729 18:22:57.592923  123843 main.go:141] libmachine: (multinode-976328) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:e1:42", ip: ""} in network mk-multinode-976328: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:28 +0000 UTC Type:0 Mac:52:54:00:56:e1:42 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:multinode-976328 Clientid:01:52:54:00:56:e1:42}
	I0729 18:22:57.592947  123843 main.go:141] libmachine: (multinode-976328) DBG | domain multinode-976328 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:e1:42 in network mk-multinode-976328
	I0729 18:22:57.593091  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHPort
	I0729 18:22:57.593270  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHKeyPath
	I0729 18:22:57.593426  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHKeyPath
	I0729 18:22:57.593560  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHUsername
	I0729 18:22:57.593730  123843 main.go:141] libmachine: Using SSH client type: native
	I0729 18:22:57.593927  123843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0729 18:22:57.593939  123843 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 18:22:57.698274  123843 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-976328
	
	I0729 18:22:57.698332  123843 main.go:141] libmachine: (multinode-976328) Calling .GetMachineName
	I0729 18:22:57.698624  123843 buildroot.go:166] provisioning hostname "multinode-976328"
	I0729 18:22:57.698650  123843 main.go:141] libmachine: (multinode-976328) Calling .GetMachineName
	I0729 18:22:57.698875  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHHostname
	I0729 18:22:57.701474  123843 main.go:141] libmachine: (multinode-976328) DBG | domain multinode-976328 has defined MAC address 52:54:00:56:e1:42 in network mk-multinode-976328
	I0729 18:22:57.701810  123843 main.go:141] libmachine: (multinode-976328) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:e1:42", ip: ""} in network mk-multinode-976328: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:28 +0000 UTC Type:0 Mac:52:54:00:56:e1:42 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:multinode-976328 Clientid:01:52:54:00:56:e1:42}
	I0729 18:22:57.701841  123843 main.go:141] libmachine: (multinode-976328) DBG | domain multinode-976328 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:e1:42 in network mk-multinode-976328
	I0729 18:22:57.702042  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHPort
	I0729 18:22:57.702228  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHKeyPath
	I0729 18:22:57.702400  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHKeyPath
	I0729 18:22:57.702561  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHUsername
	I0729 18:22:57.702707  123843 main.go:141] libmachine: Using SSH client type: native
	I0729 18:22:57.702897  123843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0729 18:22:57.702915  123843 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-976328 && echo "multinode-976328" | sudo tee /etc/hostname
	I0729 18:22:57.826863  123843 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-976328
	
	I0729 18:22:57.826896  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHHostname
	I0729 18:22:57.829596  123843 main.go:141] libmachine: (multinode-976328) DBG | domain multinode-976328 has defined MAC address 52:54:00:56:e1:42 in network mk-multinode-976328
	I0729 18:22:57.830002  123843 main.go:141] libmachine: (multinode-976328) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:e1:42", ip: ""} in network mk-multinode-976328: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:28 +0000 UTC Type:0 Mac:52:54:00:56:e1:42 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:multinode-976328 Clientid:01:52:54:00:56:e1:42}
	I0729 18:22:57.830038  123843 main.go:141] libmachine: (multinode-976328) DBG | domain multinode-976328 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:e1:42 in network mk-multinode-976328
	I0729 18:22:57.830231  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHPort
	I0729 18:22:57.830419  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHKeyPath
	I0729 18:22:57.830598  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHKeyPath
	I0729 18:22:57.830722  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHUsername
	I0729 18:22:57.830919  123843 main.go:141] libmachine: Using SSH client type: native
	I0729 18:22:57.831135  123843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0729 18:22:57.831159  123843 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-976328' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-976328/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-976328' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 18:22:57.937684  123843 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 18:22:57.937724  123843 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19339-88081/.minikube CaCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19339-88081/.minikube}
	I0729 18:22:57.937748  123843 buildroot.go:174] setting up certificates
	I0729 18:22:57.937756  123843 provision.go:84] configureAuth start
	I0729 18:22:57.937765  123843 main.go:141] libmachine: (multinode-976328) Calling .GetMachineName
	I0729 18:22:57.938027  123843 main.go:141] libmachine: (multinode-976328) Calling .GetIP
	I0729 18:22:57.940568  123843 main.go:141] libmachine: (multinode-976328) DBG | domain multinode-976328 has defined MAC address 52:54:00:56:e1:42 in network mk-multinode-976328
	I0729 18:22:57.940977  123843 main.go:141] libmachine: (multinode-976328) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:e1:42", ip: ""} in network mk-multinode-976328: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:28 +0000 UTC Type:0 Mac:52:54:00:56:e1:42 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:multinode-976328 Clientid:01:52:54:00:56:e1:42}
	I0729 18:22:57.941012  123843 main.go:141] libmachine: (multinode-976328) DBG | domain multinode-976328 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:e1:42 in network mk-multinode-976328
	I0729 18:22:57.941191  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHHostname
	I0729 18:22:57.943603  123843 main.go:141] libmachine: (multinode-976328) DBG | domain multinode-976328 has defined MAC address 52:54:00:56:e1:42 in network mk-multinode-976328
	I0729 18:22:57.943903  123843 main.go:141] libmachine: (multinode-976328) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:e1:42", ip: ""} in network mk-multinode-976328: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:28 +0000 UTC Type:0 Mac:52:54:00:56:e1:42 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:multinode-976328 Clientid:01:52:54:00:56:e1:42}
	I0729 18:22:57.943953  123843 main.go:141] libmachine: (multinode-976328) DBG | domain multinode-976328 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:e1:42 in network mk-multinode-976328
	I0729 18:22:57.944053  123843 provision.go:143] copyHostCerts
	I0729 18:22:57.944085  123843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem
	I0729 18:22:57.944114  123843 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem, removing ...
	I0729 18:22:57.944122  123843 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem
	I0729 18:22:57.944188  123843 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem (1078 bytes)
	I0729 18:22:57.944260  123843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem
	I0729 18:22:57.944279  123843 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem, removing ...
	I0729 18:22:57.944283  123843 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem
	I0729 18:22:57.944308  123843 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem (1123 bytes)
	I0729 18:22:57.944392  123843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem
	I0729 18:22:57.944412  123843 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem, removing ...
	I0729 18:22:57.944416  123843 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem
	I0729 18:22:57.944440  123843 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem (1679 bytes)
	I0729 18:22:57.944483  123843 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem org=jenkins.multinode-976328 san=[127.0.0.1 192.168.39.211 localhost minikube multinode-976328]
	I0729 18:22:58.035014  123843 provision.go:177] copyRemoteCerts
	I0729 18:22:58.035092  123843 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 18:22:58.035118  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHHostname
	I0729 18:22:58.037768  123843 main.go:141] libmachine: (multinode-976328) DBG | domain multinode-976328 has defined MAC address 52:54:00:56:e1:42 in network mk-multinode-976328
	I0729 18:22:58.038184  123843 main.go:141] libmachine: (multinode-976328) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:e1:42", ip: ""} in network mk-multinode-976328: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:28 +0000 UTC Type:0 Mac:52:54:00:56:e1:42 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:multinode-976328 Clientid:01:52:54:00:56:e1:42}
	I0729 18:22:58.038216  123843 main.go:141] libmachine: (multinode-976328) DBG | domain multinode-976328 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:e1:42 in network mk-multinode-976328
	I0729 18:22:58.038369  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHPort
	I0729 18:22:58.038567  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHKeyPath
	I0729 18:22:58.038708  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHUsername
	I0729 18:22:58.038868  123843 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/multinode-976328/id_rsa Username:docker}
	I0729 18:22:58.119217  123843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 18:22:58.119291  123843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 18:22:58.144225  123843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 18:22:58.144282  123843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0729 18:22:58.168526  123843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 18:22:58.168588  123843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 18:22:58.192738  123843 provision.go:87] duration metric: took 254.967854ms to configureAuth
	I0729 18:22:58.192764  123843 buildroot.go:189] setting minikube options for container-runtime
	I0729 18:22:58.193055  123843 config.go:182] Loaded profile config "multinode-976328": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:22:58.193153  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHHostname
	I0729 18:22:58.195804  123843 main.go:141] libmachine: (multinode-976328) DBG | domain multinode-976328 has defined MAC address 52:54:00:56:e1:42 in network mk-multinode-976328
	I0729 18:22:58.196236  123843 main.go:141] libmachine: (multinode-976328) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:e1:42", ip: ""} in network mk-multinode-976328: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:28 +0000 UTC Type:0 Mac:52:54:00:56:e1:42 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:multinode-976328 Clientid:01:52:54:00:56:e1:42}
	I0729 18:22:58.196261  123843 main.go:141] libmachine: (multinode-976328) DBG | domain multinode-976328 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:e1:42 in network mk-multinode-976328
	I0729 18:22:58.196460  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHPort
	I0729 18:22:58.196661  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHKeyPath
	I0729 18:22:58.196814  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHKeyPath
	I0729 18:22:58.196962  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHUsername
	I0729 18:22:58.197093  123843 main.go:141] libmachine: Using SSH client type: native
	I0729 18:22:58.197260  123843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0729 18:22:58.197274  123843 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 18:24:29.024042  123843 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 18:24:29.024105  123843 machine.go:97] duration metric: took 1m31.434198984s to provisionDockerMachine
	I0729 18:24:29.024124  123843 start.go:293] postStartSetup for "multinode-976328" (driver="kvm2")
	I0729 18:24:29.024139  123843 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 18:24:29.024165  123843 main.go:141] libmachine: (multinode-976328) Calling .DriverName
	I0729 18:24:29.024541  123843 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 18:24:29.024583  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHHostname
	I0729 18:24:29.027993  123843 main.go:141] libmachine: (multinode-976328) DBG | domain multinode-976328 has defined MAC address 52:54:00:56:e1:42 in network mk-multinode-976328
	I0729 18:24:29.028432  123843 main.go:141] libmachine: (multinode-976328) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:e1:42", ip: ""} in network mk-multinode-976328: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:28 +0000 UTC Type:0 Mac:52:54:00:56:e1:42 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:multinode-976328 Clientid:01:52:54:00:56:e1:42}
	I0729 18:24:29.028454  123843 main.go:141] libmachine: (multinode-976328) DBG | domain multinode-976328 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:e1:42 in network mk-multinode-976328
	I0729 18:24:29.028615  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHPort
	I0729 18:24:29.028793  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHKeyPath
	I0729 18:24:29.029035  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHUsername
	I0729 18:24:29.029217  123843 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/multinode-976328/id_rsa Username:docker}
	I0729 18:24:29.112235  123843 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 18:24:29.116688  123843 command_runner.go:130] > NAME=Buildroot
	I0729 18:24:29.116713  123843 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0729 18:24:29.116719  123843 command_runner.go:130] > ID=buildroot
	I0729 18:24:29.116726  123843 command_runner.go:130] > VERSION_ID=2023.02.9
	I0729 18:24:29.116732  123843 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0729 18:24:29.116767  123843 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 18:24:29.116785  123843 filesync.go:126] Scanning /home/jenkins/minikube-integration/19339-88081/.minikube/addons for local assets ...
	I0729 18:24:29.116886  123843 filesync.go:126] Scanning /home/jenkins/minikube-integration/19339-88081/.minikube/files for local assets ...
	I0729 18:24:29.116985  123843 filesync.go:149] local asset: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem -> 952822.pem in /etc/ssl/certs
	I0729 18:24:29.117001  123843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem -> /etc/ssl/certs/952822.pem
	I0729 18:24:29.117115  123843 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 18:24:29.126517  123843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem --> /etc/ssl/certs/952822.pem (1708 bytes)
	I0729 18:24:29.149882  123843 start.go:296] duration metric: took 125.741298ms for postStartSetup
	I0729 18:24:29.149940  123843 fix.go:56] duration metric: took 1m31.58163236s for fixHost
	I0729 18:24:29.149972  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHHostname
	I0729 18:24:29.152689  123843 main.go:141] libmachine: (multinode-976328) DBG | domain multinode-976328 has defined MAC address 52:54:00:56:e1:42 in network mk-multinode-976328
	I0729 18:24:29.153004  123843 main.go:141] libmachine: (multinode-976328) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:e1:42", ip: ""} in network mk-multinode-976328: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:28 +0000 UTC Type:0 Mac:52:54:00:56:e1:42 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:multinode-976328 Clientid:01:52:54:00:56:e1:42}
	I0729 18:24:29.153033  123843 main.go:141] libmachine: (multinode-976328) DBG | domain multinode-976328 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:e1:42 in network mk-multinode-976328
	I0729 18:24:29.153161  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHPort
	I0729 18:24:29.153357  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHKeyPath
	I0729 18:24:29.153541  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHKeyPath
	I0729 18:24:29.153685  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHUsername
	I0729 18:24:29.153893  123843 main.go:141] libmachine: Using SSH client type: native
	I0729 18:24:29.154077  123843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0729 18:24:29.154092  123843 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 18:24:29.257458  123843 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722277469.232257137
	
	I0729 18:24:29.257489  123843 fix.go:216] guest clock: 1722277469.232257137
	I0729 18:24:29.257500  123843 fix.go:229] Guest: 2024-07-29 18:24:29.232257137 +0000 UTC Remote: 2024-07-29 18:24:29.149949853 +0000 UTC m=+91.704777228 (delta=82.307284ms)
	I0729 18:24:29.257562  123843 fix.go:200] guest clock delta is within tolerance: 82.307284ms
	I0729 18:24:29.257574  123843 start.go:83] releasing machines lock for "multinode-976328", held for 1m31.689283817s
	I0729 18:24:29.257627  123843 main.go:141] libmachine: (multinode-976328) Calling .DriverName
	I0729 18:24:29.257908  123843 main.go:141] libmachine: (multinode-976328) Calling .GetIP
	I0729 18:24:29.260505  123843 main.go:141] libmachine: (multinode-976328) DBG | domain multinode-976328 has defined MAC address 52:54:00:56:e1:42 in network mk-multinode-976328
	I0729 18:24:29.260886  123843 main.go:141] libmachine: (multinode-976328) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:e1:42", ip: ""} in network mk-multinode-976328: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:28 +0000 UTC Type:0 Mac:52:54:00:56:e1:42 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:multinode-976328 Clientid:01:52:54:00:56:e1:42}
	I0729 18:24:29.260915  123843 main.go:141] libmachine: (multinode-976328) DBG | domain multinode-976328 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:e1:42 in network mk-multinode-976328
	I0729 18:24:29.261069  123843 main.go:141] libmachine: (multinode-976328) Calling .DriverName
	I0729 18:24:29.261556  123843 main.go:141] libmachine: (multinode-976328) Calling .DriverName
	I0729 18:24:29.261765  123843 main.go:141] libmachine: (multinode-976328) Calling .DriverName
	I0729 18:24:29.261882  123843 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 18:24:29.261931  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHHostname
	I0729 18:24:29.261992  123843 ssh_runner.go:195] Run: cat /version.json
	I0729 18:24:29.262013  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHHostname
	I0729 18:24:29.264582  123843 main.go:141] libmachine: (multinode-976328) DBG | domain multinode-976328 has defined MAC address 52:54:00:56:e1:42 in network mk-multinode-976328
	I0729 18:24:29.264942  123843 main.go:141] libmachine: (multinode-976328) DBG | domain multinode-976328 has defined MAC address 52:54:00:56:e1:42 in network mk-multinode-976328
	I0729 18:24:29.265000  123843 main.go:141] libmachine: (multinode-976328) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:e1:42", ip: ""} in network mk-multinode-976328: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:28 +0000 UTC Type:0 Mac:52:54:00:56:e1:42 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:multinode-976328 Clientid:01:52:54:00:56:e1:42}
	I0729 18:24:29.265026  123843 main.go:141] libmachine: (multinode-976328) DBG | domain multinode-976328 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:e1:42 in network mk-multinode-976328
	I0729 18:24:29.265156  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHPort
	I0729 18:24:29.265371  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHKeyPath
	I0729 18:24:29.265434  123843 main.go:141] libmachine: (multinode-976328) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:e1:42", ip: ""} in network mk-multinode-976328: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:28 +0000 UTC Type:0 Mac:52:54:00:56:e1:42 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:multinode-976328 Clientid:01:52:54:00:56:e1:42}
	I0729 18:24:29.265461  123843 main.go:141] libmachine: (multinode-976328) DBG | domain multinode-976328 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:e1:42 in network mk-multinode-976328
	I0729 18:24:29.265593  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHPort
	I0729 18:24:29.265627  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHUsername
	I0729 18:24:29.265756  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHKeyPath
	I0729 18:24:29.265762  123843 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/multinode-976328/id_rsa Username:docker}
	I0729 18:24:29.265912  123843 main.go:141] libmachine: (multinode-976328) Calling .GetSSHUsername
	I0729 18:24:29.266065  123843 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/multinode-976328/id_rsa Username:docker}
	I0729 18:24:29.360987  123843 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0729 18:24:29.361587  123843 command_runner.go:130] > {"iso_version": "v1.33.1-1722248113-19339", "kicbase_version": "v0.0.44-1721902582-19326", "minikube_version": "v1.33.1", "commit": "b8389556a97747a5bbaa1906d238251ad536d76e"}
	I0729 18:24:29.361764  123843 ssh_runner.go:195] Run: systemctl --version
	I0729 18:24:29.367349  123843 command_runner.go:130] > systemd 252 (252)
	I0729 18:24:29.367384  123843 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0729 18:24:29.367441  123843 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 18:24:29.536713  123843 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0729 18:24:29.551746  123843 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0729 18:24:29.551820  123843 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 18:24:29.551899  123843 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 18:24:29.562335  123843 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0729 18:24:29.562361  123843 start.go:495] detecting cgroup driver to use...
	I0729 18:24:29.562419  123843 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 18:24:29.581046  123843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 18:24:29.600639  123843 docker.go:217] disabling cri-docker service (if available) ...
	I0729 18:24:29.600715  123843 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 18:24:29.619374  123843 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 18:24:29.641249  123843 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 18:24:29.790698  123843 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 18:24:29.932104  123843 docker.go:233] disabling docker service ...
	I0729 18:24:29.932187  123843 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 18:24:29.949496  123843 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 18:24:29.962678  123843 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 18:24:30.103174  123843 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 18:24:30.246717  123843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 18:24:30.261295  123843 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 18:24:30.279497  123843 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0729 18:24:30.279547  123843 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 18:24:30.279592  123843 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:24:30.290159  123843 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 18:24:30.290230  123843 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:24:30.300570  123843 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:24:30.311042  123843 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:24:30.321336  123843 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 18:24:30.332747  123843 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:24:30.343166  123843 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:24:30.354270  123843 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:24:30.364877  123843 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 18:24:30.374082  123843 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0729 18:24:30.374137  123843 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 18:24:30.383443  123843 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:24:30.522540  123843 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 18:24:34.871239  123843 ssh_runner.go:235] Completed: sudo systemctl restart crio: (4.348655457s)
	I0729 18:24:34.871274  123843 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 18:24:34.871331  123843 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 18:24:34.876043  123843 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0729 18:24:34.876070  123843 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0729 18:24:34.876081  123843 command_runner.go:130] > Device: 0,22	Inode: 1355        Links: 1
	I0729 18:24:34.876091  123843 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0729 18:24:34.876102  123843 command_runner.go:130] > Access: 2024-07-29 18:24:34.734267717 +0000
	I0729 18:24:34.876109  123843 command_runner.go:130] > Modify: 2024-07-29 18:24:34.734267717 +0000
	I0729 18:24:34.876116  123843 command_runner.go:130] > Change: 2024-07-29 18:24:34.734267717 +0000
	I0729 18:24:34.876120  123843 command_runner.go:130] >  Birth: -
	I0729 18:24:34.876139  123843 start.go:563] Will wait 60s for crictl version
	I0729 18:24:34.876179  123843 ssh_runner.go:195] Run: which crictl
	I0729 18:24:34.879756  123843 command_runner.go:130] > /usr/bin/crictl
	I0729 18:24:34.879826  123843 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 18:24:34.916240  123843 command_runner.go:130] > Version:  0.1.0
	I0729 18:24:34.916265  123843 command_runner.go:130] > RuntimeName:  cri-o
	I0729 18:24:34.916273  123843 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0729 18:24:34.916281  123843 command_runner.go:130] > RuntimeApiVersion:  v1
	I0729 18:24:34.916302  123843 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 18:24:34.916382  123843 ssh_runner.go:195] Run: crio --version
	I0729 18:24:34.947447  123843 command_runner.go:130] > crio version 1.29.1
	I0729 18:24:34.947476  123843 command_runner.go:130] > Version:        1.29.1
	I0729 18:24:34.947484  123843 command_runner.go:130] > GitCommit:      unknown
	I0729 18:24:34.947490  123843 command_runner.go:130] > GitCommitDate:  unknown
	I0729 18:24:34.947497  123843 command_runner.go:130] > GitTreeState:   clean
	I0729 18:24:34.947506  123843 command_runner.go:130] > BuildDate:      2024-07-29T16:04:01Z
	I0729 18:24:34.947513  123843 command_runner.go:130] > GoVersion:      go1.21.6
	I0729 18:24:34.947518  123843 command_runner.go:130] > Compiler:       gc
	I0729 18:24:34.947525  123843 command_runner.go:130] > Platform:       linux/amd64
	I0729 18:24:34.947535  123843 command_runner.go:130] > Linkmode:       dynamic
	I0729 18:24:34.947540  123843 command_runner.go:130] > BuildTags:      
	I0729 18:24:34.947545  123843 command_runner.go:130] >   containers_image_ostree_stub
	I0729 18:24:34.947552  123843 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0729 18:24:34.947556  123843 command_runner.go:130] >   btrfs_noversion
	I0729 18:24:34.947575  123843 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0729 18:24:34.947580  123843 command_runner.go:130] >   libdm_no_deferred_remove
	I0729 18:24:34.947583  123843 command_runner.go:130] >   seccomp
	I0729 18:24:34.947587  123843 command_runner.go:130] > LDFlags:          unknown
	I0729 18:24:34.947590  123843 command_runner.go:130] > SeccompEnabled:   true
	I0729 18:24:34.947594  123843 command_runner.go:130] > AppArmorEnabled:  false
	I0729 18:24:34.947863  123843 ssh_runner.go:195] Run: crio --version
	I0729 18:24:34.974270  123843 command_runner.go:130] > crio version 1.29.1
	I0729 18:24:34.974293  123843 command_runner.go:130] > Version:        1.29.1
	I0729 18:24:34.974331  123843 command_runner.go:130] > GitCommit:      unknown
	I0729 18:24:34.974339  123843 command_runner.go:130] > GitCommitDate:  unknown
	I0729 18:24:34.974345  123843 command_runner.go:130] > GitTreeState:   clean
	I0729 18:24:34.974352  123843 command_runner.go:130] > BuildDate:      2024-07-29T16:04:01Z
	I0729 18:24:34.974357  123843 command_runner.go:130] > GoVersion:      go1.21.6
	I0729 18:24:34.974361  123843 command_runner.go:130] > Compiler:       gc
	I0729 18:24:34.974365  123843 command_runner.go:130] > Platform:       linux/amd64
	I0729 18:24:34.974372  123843 command_runner.go:130] > Linkmode:       dynamic
	I0729 18:24:34.974377  123843 command_runner.go:130] > BuildTags:      
	I0729 18:24:34.974384  123843 command_runner.go:130] >   containers_image_ostree_stub
	I0729 18:24:34.974389  123843 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0729 18:24:34.974400  123843 command_runner.go:130] >   btrfs_noversion
	I0729 18:24:34.974408  123843 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0729 18:24:34.974415  123843 command_runner.go:130] >   libdm_no_deferred_remove
	I0729 18:24:34.974424  123843 command_runner.go:130] >   seccomp
	I0729 18:24:34.974434  123843 command_runner.go:130] > LDFlags:          unknown
	I0729 18:24:34.974442  123843 command_runner.go:130] > SeccompEnabled:   true
	I0729 18:24:34.974449  123843 command_runner.go:130] > AppArmorEnabled:  false
	I0729 18:24:34.977557  123843 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 18:24:34.978912  123843 main.go:141] libmachine: (multinode-976328) Calling .GetIP
	I0729 18:24:34.981831  123843 main.go:141] libmachine: (multinode-976328) DBG | domain multinode-976328 has defined MAC address 52:54:00:56:e1:42 in network mk-multinode-976328
	I0729 18:24:34.982267  123843 main.go:141] libmachine: (multinode-976328) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:e1:42", ip: ""} in network mk-multinode-976328: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:28 +0000 UTC Type:0 Mac:52:54:00:56:e1:42 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:multinode-976328 Clientid:01:52:54:00:56:e1:42}
	I0729 18:24:34.982295  123843 main.go:141] libmachine: (multinode-976328) DBG | domain multinode-976328 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:e1:42 in network mk-multinode-976328
	I0729 18:24:34.982473  123843 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 18:24:34.986696  123843 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0729 18:24:34.986773  123843 kubeadm.go:883] updating cluster {Name:multinode-976328 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-976328 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.157 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.144 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 18:24:34.986892  123843 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 18:24:34.986938  123843 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:24:35.033624  123843 command_runner.go:130] > {
	I0729 18:24:35.033650  123843 command_runner.go:130] >   "images": [
	I0729 18:24:35.033654  123843 command_runner.go:130] >     {
	I0729 18:24:35.033663  123843 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0729 18:24:35.033668  123843 command_runner.go:130] >       "repoTags": [
	I0729 18:24:35.033673  123843 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0729 18:24:35.033677  123843 command_runner.go:130] >       ],
	I0729 18:24:35.033681  123843 command_runner.go:130] >       "repoDigests": [
	I0729 18:24:35.033688  123843 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0729 18:24:35.033696  123843 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0729 18:24:35.033701  123843 command_runner.go:130] >       ],
	I0729 18:24:35.033706  123843 command_runner.go:130] >       "size": "87165492",
	I0729 18:24:35.033712  123843 command_runner.go:130] >       "uid": null,
	I0729 18:24:35.033716  123843 command_runner.go:130] >       "username": "",
	I0729 18:24:35.033725  123843 command_runner.go:130] >       "spec": null,
	I0729 18:24:35.033731  123843 command_runner.go:130] >       "pinned": false
	I0729 18:24:35.033734  123843 command_runner.go:130] >     },
	I0729 18:24:35.033738  123843 command_runner.go:130] >     {
	I0729 18:24:35.033743  123843 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0729 18:24:35.033748  123843 command_runner.go:130] >       "repoTags": [
	I0729 18:24:35.033753  123843 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0729 18:24:35.033771  123843 command_runner.go:130] >       ],
	I0729 18:24:35.033778  123843 command_runner.go:130] >       "repoDigests": [
	I0729 18:24:35.033784  123843 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0729 18:24:35.033792  123843 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0729 18:24:35.033795  123843 command_runner.go:130] >       ],
	I0729 18:24:35.033804  123843 command_runner.go:130] >       "size": "87174707",
	I0729 18:24:35.033810  123843 command_runner.go:130] >       "uid": null,
	I0729 18:24:35.033817  123843 command_runner.go:130] >       "username": "",
	I0729 18:24:35.033821  123843 command_runner.go:130] >       "spec": null,
	I0729 18:24:35.033825  123843 command_runner.go:130] >       "pinned": false
	I0729 18:24:35.033829  123843 command_runner.go:130] >     },
	I0729 18:24:35.033832  123843 command_runner.go:130] >     {
	I0729 18:24:35.033838  123843 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0729 18:24:35.033842  123843 command_runner.go:130] >       "repoTags": [
	I0729 18:24:35.033847  123843 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0729 18:24:35.033850  123843 command_runner.go:130] >       ],
	I0729 18:24:35.033854  123843 command_runner.go:130] >       "repoDigests": [
	I0729 18:24:35.033863  123843 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0729 18:24:35.033870  123843 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0729 18:24:35.033874  123843 command_runner.go:130] >       ],
	I0729 18:24:35.033878  123843 command_runner.go:130] >       "size": "1363676",
	I0729 18:24:35.033881  123843 command_runner.go:130] >       "uid": null,
	I0729 18:24:35.033885  123843 command_runner.go:130] >       "username": "",
	I0729 18:24:35.033889  123843 command_runner.go:130] >       "spec": null,
	I0729 18:24:35.033893  123843 command_runner.go:130] >       "pinned": false
	I0729 18:24:35.033896  123843 command_runner.go:130] >     },
	I0729 18:24:35.033899  123843 command_runner.go:130] >     {
	I0729 18:24:35.033905  123843 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0729 18:24:35.033909  123843 command_runner.go:130] >       "repoTags": [
	I0729 18:24:35.033914  123843 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0729 18:24:35.033917  123843 command_runner.go:130] >       ],
	I0729 18:24:35.033921  123843 command_runner.go:130] >       "repoDigests": [
	I0729 18:24:35.033928  123843 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0729 18:24:35.033945  123843 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0729 18:24:35.033950  123843 command_runner.go:130] >       ],
	I0729 18:24:35.033954  123843 command_runner.go:130] >       "size": "31470524",
	I0729 18:24:35.033962  123843 command_runner.go:130] >       "uid": null,
	I0729 18:24:35.033968  123843 command_runner.go:130] >       "username": "",
	I0729 18:24:35.033972  123843 command_runner.go:130] >       "spec": null,
	I0729 18:24:35.033976  123843 command_runner.go:130] >       "pinned": false
	I0729 18:24:35.033979  123843 command_runner.go:130] >     },
	I0729 18:24:35.033982  123843 command_runner.go:130] >     {
	I0729 18:24:35.033988  123843 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0729 18:24:35.033995  123843 command_runner.go:130] >       "repoTags": [
	I0729 18:24:35.034000  123843 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0729 18:24:35.034006  123843 command_runner.go:130] >       ],
	I0729 18:24:35.034009  123843 command_runner.go:130] >       "repoDigests": [
	I0729 18:24:35.034016  123843 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0729 18:24:35.034025  123843 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0729 18:24:35.034040  123843 command_runner.go:130] >       ],
	I0729 18:24:35.034047  123843 command_runner.go:130] >       "size": "61245718",
	I0729 18:24:35.034050  123843 command_runner.go:130] >       "uid": null,
	I0729 18:24:35.034055  123843 command_runner.go:130] >       "username": "nonroot",
	I0729 18:24:35.034059  123843 command_runner.go:130] >       "spec": null,
	I0729 18:24:35.034063  123843 command_runner.go:130] >       "pinned": false
	I0729 18:24:35.034066  123843 command_runner.go:130] >     },
	I0729 18:24:35.034070  123843 command_runner.go:130] >     {
	I0729 18:24:35.034076  123843 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0729 18:24:35.034082  123843 command_runner.go:130] >       "repoTags": [
	I0729 18:24:35.034086  123843 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0729 18:24:35.034090  123843 command_runner.go:130] >       ],
	I0729 18:24:35.034093  123843 command_runner.go:130] >       "repoDigests": [
	I0729 18:24:35.034100  123843 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0729 18:24:35.034109  123843 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0729 18:24:35.034112  123843 command_runner.go:130] >       ],
	I0729 18:24:35.034117  123843 command_runner.go:130] >       "size": "150779692",
	I0729 18:24:35.034121  123843 command_runner.go:130] >       "uid": {
	I0729 18:24:35.034125  123843 command_runner.go:130] >         "value": "0"
	I0729 18:24:35.034131  123843 command_runner.go:130] >       },
	I0729 18:24:35.034134  123843 command_runner.go:130] >       "username": "",
	I0729 18:24:35.034138  123843 command_runner.go:130] >       "spec": null,
	I0729 18:24:35.034142  123843 command_runner.go:130] >       "pinned": false
	I0729 18:24:35.034149  123843 command_runner.go:130] >     },
	I0729 18:24:35.034155  123843 command_runner.go:130] >     {
	I0729 18:24:35.034161  123843 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0729 18:24:35.034167  123843 command_runner.go:130] >       "repoTags": [
	I0729 18:24:35.034176  123843 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0729 18:24:35.034182  123843 command_runner.go:130] >       ],
	I0729 18:24:35.034186  123843 command_runner.go:130] >       "repoDigests": [
	I0729 18:24:35.034193  123843 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0729 18:24:35.034203  123843 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0729 18:24:35.034207  123843 command_runner.go:130] >       ],
	I0729 18:24:35.034213  123843 command_runner.go:130] >       "size": "117609954",
	I0729 18:24:35.034217  123843 command_runner.go:130] >       "uid": {
	I0729 18:24:35.034223  123843 command_runner.go:130] >         "value": "0"
	I0729 18:24:35.034226  123843 command_runner.go:130] >       },
	I0729 18:24:35.034230  123843 command_runner.go:130] >       "username": "",
	I0729 18:24:35.034234  123843 command_runner.go:130] >       "spec": null,
	I0729 18:24:35.034238  123843 command_runner.go:130] >       "pinned": false
	I0729 18:24:35.034241  123843 command_runner.go:130] >     },
	I0729 18:24:35.034244  123843 command_runner.go:130] >     {
	I0729 18:24:35.034250  123843 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0729 18:24:35.034255  123843 command_runner.go:130] >       "repoTags": [
	I0729 18:24:35.034260  123843 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0729 18:24:35.034265  123843 command_runner.go:130] >       ],
	I0729 18:24:35.034269  123843 command_runner.go:130] >       "repoDigests": [
	I0729 18:24:35.034289  123843 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0729 18:24:35.034298  123843 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0729 18:24:35.034302  123843 command_runner.go:130] >       ],
	I0729 18:24:35.034306  123843 command_runner.go:130] >       "size": "112198984",
	I0729 18:24:35.034311  123843 command_runner.go:130] >       "uid": {
	I0729 18:24:35.034315  123843 command_runner.go:130] >         "value": "0"
	I0729 18:24:35.034318  123843 command_runner.go:130] >       },
	I0729 18:24:35.034322  123843 command_runner.go:130] >       "username": "",
	I0729 18:24:35.034325  123843 command_runner.go:130] >       "spec": null,
	I0729 18:24:35.034328  123843 command_runner.go:130] >       "pinned": false
	I0729 18:24:35.034331  123843 command_runner.go:130] >     },
	I0729 18:24:35.034334  123843 command_runner.go:130] >     {
	I0729 18:24:35.034345  123843 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0729 18:24:35.034349  123843 command_runner.go:130] >       "repoTags": [
	I0729 18:24:35.034354  123843 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0729 18:24:35.034357  123843 command_runner.go:130] >       ],
	I0729 18:24:35.034361  123843 command_runner.go:130] >       "repoDigests": [
	I0729 18:24:35.034370  123843 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0729 18:24:35.034376  123843 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0729 18:24:35.034379  123843 command_runner.go:130] >       ],
	I0729 18:24:35.034383  123843 command_runner.go:130] >       "size": "85953945",
	I0729 18:24:35.034386  123843 command_runner.go:130] >       "uid": null,
	I0729 18:24:35.034390  123843 command_runner.go:130] >       "username": "",
	I0729 18:24:35.034393  123843 command_runner.go:130] >       "spec": null,
	I0729 18:24:35.034397  123843 command_runner.go:130] >       "pinned": false
	I0729 18:24:35.034400  123843 command_runner.go:130] >     },
	I0729 18:24:35.034403  123843 command_runner.go:130] >     {
	I0729 18:24:35.034409  123843 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0729 18:24:35.034413  123843 command_runner.go:130] >       "repoTags": [
	I0729 18:24:35.034418  123843 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0729 18:24:35.034424  123843 command_runner.go:130] >       ],
	I0729 18:24:35.034428  123843 command_runner.go:130] >       "repoDigests": [
	I0729 18:24:35.034436  123843 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0729 18:24:35.034445  123843 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0729 18:24:35.034449  123843 command_runner.go:130] >       ],
	I0729 18:24:35.034453  123843 command_runner.go:130] >       "size": "63051080",
	I0729 18:24:35.034458  123843 command_runner.go:130] >       "uid": {
	I0729 18:24:35.034462  123843 command_runner.go:130] >         "value": "0"
	I0729 18:24:35.034465  123843 command_runner.go:130] >       },
	I0729 18:24:35.034469  123843 command_runner.go:130] >       "username": "",
	I0729 18:24:35.034473  123843 command_runner.go:130] >       "spec": null,
	I0729 18:24:35.034478  123843 command_runner.go:130] >       "pinned": false
	I0729 18:24:35.034482  123843 command_runner.go:130] >     },
	I0729 18:24:35.034485  123843 command_runner.go:130] >     {
	I0729 18:24:35.034499  123843 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0729 18:24:35.034502  123843 command_runner.go:130] >       "repoTags": [
	I0729 18:24:35.034509  123843 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0729 18:24:35.034512  123843 command_runner.go:130] >       ],
	I0729 18:24:35.034521  123843 command_runner.go:130] >       "repoDigests": [
	I0729 18:24:35.034530  123843 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0729 18:24:35.034537  123843 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0729 18:24:35.034543  123843 command_runner.go:130] >       ],
	I0729 18:24:35.034546  123843 command_runner.go:130] >       "size": "750414",
	I0729 18:24:35.034558  123843 command_runner.go:130] >       "uid": {
	I0729 18:24:35.034561  123843 command_runner.go:130] >         "value": "65535"
	I0729 18:24:35.034565  123843 command_runner.go:130] >       },
	I0729 18:24:35.034569  123843 command_runner.go:130] >       "username": "",
	I0729 18:24:35.034574  123843 command_runner.go:130] >       "spec": null,
	I0729 18:24:35.034578  123843 command_runner.go:130] >       "pinned": true
	I0729 18:24:35.034581  123843 command_runner.go:130] >     }
	I0729 18:24:35.034587  123843 command_runner.go:130] >   ]
	I0729 18:24:35.034594  123843 command_runner.go:130] > }
	I0729 18:24:35.034951  123843 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 18:24:35.034966  123843 crio.go:433] Images already preloaded, skipping extraction
	I0729 18:24:35.035017  123843 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:24:35.067287  123843 command_runner.go:130] > {
	I0729 18:24:35.067309  123843 command_runner.go:130] >   "images": [
	I0729 18:24:35.067313  123843 command_runner.go:130] >     {
	I0729 18:24:35.067325  123843 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0729 18:24:35.067330  123843 command_runner.go:130] >       "repoTags": [
	I0729 18:24:35.067344  123843 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0729 18:24:35.067351  123843 command_runner.go:130] >       ],
	I0729 18:24:35.067355  123843 command_runner.go:130] >       "repoDigests": [
	I0729 18:24:35.067365  123843 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0729 18:24:35.067373  123843 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0729 18:24:35.067377  123843 command_runner.go:130] >       ],
	I0729 18:24:35.067382  123843 command_runner.go:130] >       "size": "87165492",
	I0729 18:24:35.067386  123843 command_runner.go:130] >       "uid": null,
	I0729 18:24:35.067393  123843 command_runner.go:130] >       "username": "",
	I0729 18:24:35.067398  123843 command_runner.go:130] >       "spec": null,
	I0729 18:24:35.067402  123843 command_runner.go:130] >       "pinned": false
	I0729 18:24:35.067407  123843 command_runner.go:130] >     },
	I0729 18:24:35.067411  123843 command_runner.go:130] >     {
	I0729 18:24:35.067416  123843 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0729 18:24:35.067423  123843 command_runner.go:130] >       "repoTags": [
	I0729 18:24:35.067428  123843 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0729 18:24:35.067432  123843 command_runner.go:130] >       ],
	I0729 18:24:35.067436  123843 command_runner.go:130] >       "repoDigests": [
	I0729 18:24:35.067442  123843 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0729 18:24:35.067451  123843 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0729 18:24:35.067455  123843 command_runner.go:130] >       ],
	I0729 18:24:35.067459  123843 command_runner.go:130] >       "size": "87174707",
	I0729 18:24:35.067464  123843 command_runner.go:130] >       "uid": null,
	I0729 18:24:35.067470  123843 command_runner.go:130] >       "username": "",
	I0729 18:24:35.067475  123843 command_runner.go:130] >       "spec": null,
	I0729 18:24:35.067478  123843 command_runner.go:130] >       "pinned": false
	I0729 18:24:35.067481  123843 command_runner.go:130] >     },
	I0729 18:24:35.067485  123843 command_runner.go:130] >     {
	I0729 18:24:35.067491  123843 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0729 18:24:35.067496  123843 command_runner.go:130] >       "repoTags": [
	I0729 18:24:35.067500  123843 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0729 18:24:35.067504  123843 command_runner.go:130] >       ],
	I0729 18:24:35.067508  123843 command_runner.go:130] >       "repoDigests": [
	I0729 18:24:35.067518  123843 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0729 18:24:35.067527  123843 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0729 18:24:35.067531  123843 command_runner.go:130] >       ],
	I0729 18:24:35.067542  123843 command_runner.go:130] >       "size": "1363676",
	I0729 18:24:35.067549  123843 command_runner.go:130] >       "uid": null,
	I0729 18:24:35.067557  123843 command_runner.go:130] >       "username": "",
	I0729 18:24:35.067565  123843 command_runner.go:130] >       "spec": null,
	I0729 18:24:35.067572  123843 command_runner.go:130] >       "pinned": false
	I0729 18:24:35.067575  123843 command_runner.go:130] >     },
	I0729 18:24:35.067579  123843 command_runner.go:130] >     {
	I0729 18:24:35.067585  123843 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0729 18:24:35.067591  123843 command_runner.go:130] >       "repoTags": [
	I0729 18:24:35.067596  123843 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0729 18:24:35.067602  123843 command_runner.go:130] >       ],
	I0729 18:24:35.067606  123843 command_runner.go:130] >       "repoDigests": [
	I0729 18:24:35.067616  123843 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0729 18:24:35.067632  123843 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0729 18:24:35.067638  123843 command_runner.go:130] >       ],
	I0729 18:24:35.067642  123843 command_runner.go:130] >       "size": "31470524",
	I0729 18:24:35.067648  123843 command_runner.go:130] >       "uid": null,
	I0729 18:24:35.067652  123843 command_runner.go:130] >       "username": "",
	I0729 18:24:35.067658  123843 command_runner.go:130] >       "spec": null,
	I0729 18:24:35.067661  123843 command_runner.go:130] >       "pinned": false
	I0729 18:24:35.067667  123843 command_runner.go:130] >     },
	I0729 18:24:35.067670  123843 command_runner.go:130] >     {
	I0729 18:24:35.067678  123843 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0729 18:24:35.067685  123843 command_runner.go:130] >       "repoTags": [
	I0729 18:24:35.067690  123843 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0729 18:24:35.067696  123843 command_runner.go:130] >       ],
	I0729 18:24:35.067700  123843 command_runner.go:130] >       "repoDigests": [
	I0729 18:24:35.067709  123843 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0729 18:24:35.067718  123843 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0729 18:24:35.067723  123843 command_runner.go:130] >       ],
	I0729 18:24:35.067727  123843 command_runner.go:130] >       "size": "61245718",
	I0729 18:24:35.067733  123843 command_runner.go:130] >       "uid": null,
	I0729 18:24:35.067738  123843 command_runner.go:130] >       "username": "nonroot",
	I0729 18:24:35.067744  123843 command_runner.go:130] >       "spec": null,
	I0729 18:24:35.067747  123843 command_runner.go:130] >       "pinned": false
	I0729 18:24:35.067753  123843 command_runner.go:130] >     },
	I0729 18:24:35.067760  123843 command_runner.go:130] >     {
	I0729 18:24:35.067768  123843 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0729 18:24:35.067775  123843 command_runner.go:130] >       "repoTags": [
	I0729 18:24:35.067780  123843 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0729 18:24:35.067786  123843 command_runner.go:130] >       ],
	I0729 18:24:35.067789  123843 command_runner.go:130] >       "repoDigests": [
	I0729 18:24:35.067805  123843 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0729 18:24:35.067813  123843 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0729 18:24:35.067819  123843 command_runner.go:130] >       ],
	I0729 18:24:35.067822  123843 command_runner.go:130] >       "size": "150779692",
	I0729 18:24:35.067829  123843 command_runner.go:130] >       "uid": {
	I0729 18:24:35.067832  123843 command_runner.go:130] >         "value": "0"
	I0729 18:24:35.067839  123843 command_runner.go:130] >       },
	I0729 18:24:35.067846  123843 command_runner.go:130] >       "username": "",
	I0729 18:24:35.067850  123843 command_runner.go:130] >       "spec": null,
	I0729 18:24:35.067855  123843 command_runner.go:130] >       "pinned": false
	I0729 18:24:35.067859  123843 command_runner.go:130] >     },
	I0729 18:24:35.067864  123843 command_runner.go:130] >     {
	I0729 18:24:35.067870  123843 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0729 18:24:35.067876  123843 command_runner.go:130] >       "repoTags": [
	I0729 18:24:35.067881  123843 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0729 18:24:35.067886  123843 command_runner.go:130] >       ],
	I0729 18:24:35.067891  123843 command_runner.go:130] >       "repoDigests": [
	I0729 18:24:35.067900  123843 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0729 18:24:35.067914  123843 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0729 18:24:35.067919  123843 command_runner.go:130] >       ],
	I0729 18:24:35.067924  123843 command_runner.go:130] >       "size": "117609954",
	I0729 18:24:35.067930  123843 command_runner.go:130] >       "uid": {
	I0729 18:24:35.067934  123843 command_runner.go:130] >         "value": "0"
	I0729 18:24:35.067937  123843 command_runner.go:130] >       },
	I0729 18:24:35.067943  123843 command_runner.go:130] >       "username": "",
	I0729 18:24:35.067947  123843 command_runner.go:130] >       "spec": null,
	I0729 18:24:35.067953  123843 command_runner.go:130] >       "pinned": false
	I0729 18:24:35.067956  123843 command_runner.go:130] >     },
	I0729 18:24:35.067961  123843 command_runner.go:130] >     {
	I0729 18:24:35.067967  123843 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0729 18:24:35.067979  123843 command_runner.go:130] >       "repoTags": [
	I0729 18:24:35.067986  123843 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0729 18:24:35.067992  123843 command_runner.go:130] >       ],
	I0729 18:24:35.067996  123843 command_runner.go:130] >       "repoDigests": [
	I0729 18:24:35.068019  123843 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0729 18:24:35.068029  123843 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0729 18:24:35.068036  123843 command_runner.go:130] >       ],
	I0729 18:24:35.068039  123843 command_runner.go:130] >       "size": "112198984",
	I0729 18:24:35.068045  123843 command_runner.go:130] >       "uid": {
	I0729 18:24:35.068049  123843 command_runner.go:130] >         "value": "0"
	I0729 18:24:35.068054  123843 command_runner.go:130] >       },
	I0729 18:24:35.068058  123843 command_runner.go:130] >       "username": "",
	I0729 18:24:35.068064  123843 command_runner.go:130] >       "spec": null,
	I0729 18:24:35.068068  123843 command_runner.go:130] >       "pinned": false
	I0729 18:24:35.068073  123843 command_runner.go:130] >     },
	I0729 18:24:35.068077  123843 command_runner.go:130] >     {
	I0729 18:24:35.068085  123843 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0729 18:24:35.068090  123843 command_runner.go:130] >       "repoTags": [
	I0729 18:24:35.068094  123843 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0729 18:24:35.068099  123843 command_runner.go:130] >       ],
	I0729 18:24:35.068104  123843 command_runner.go:130] >       "repoDigests": [
	I0729 18:24:35.068112  123843 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0729 18:24:35.068123  123843 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0729 18:24:35.068129  123843 command_runner.go:130] >       ],
	I0729 18:24:35.068133  123843 command_runner.go:130] >       "size": "85953945",
	I0729 18:24:35.068139  123843 command_runner.go:130] >       "uid": null,
	I0729 18:24:35.068143  123843 command_runner.go:130] >       "username": "",
	I0729 18:24:35.068148  123843 command_runner.go:130] >       "spec": null,
	I0729 18:24:35.068152  123843 command_runner.go:130] >       "pinned": false
	I0729 18:24:35.068157  123843 command_runner.go:130] >     },
	I0729 18:24:35.068161  123843 command_runner.go:130] >     {
	I0729 18:24:35.068169  123843 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0729 18:24:35.068175  123843 command_runner.go:130] >       "repoTags": [
	I0729 18:24:35.068180  123843 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0729 18:24:35.068186  123843 command_runner.go:130] >       ],
	I0729 18:24:35.068189  123843 command_runner.go:130] >       "repoDigests": [
	I0729 18:24:35.068202  123843 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0729 18:24:35.068212  123843 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0729 18:24:35.068216  123843 command_runner.go:130] >       ],
	I0729 18:24:35.068222  123843 command_runner.go:130] >       "size": "63051080",
	I0729 18:24:35.068226  123843 command_runner.go:130] >       "uid": {
	I0729 18:24:35.068232  123843 command_runner.go:130] >         "value": "0"
	I0729 18:24:35.068235  123843 command_runner.go:130] >       },
	I0729 18:24:35.068242  123843 command_runner.go:130] >       "username": "",
	I0729 18:24:35.068245  123843 command_runner.go:130] >       "spec": null,
	I0729 18:24:35.068251  123843 command_runner.go:130] >       "pinned": false
	I0729 18:24:35.068254  123843 command_runner.go:130] >     },
	I0729 18:24:35.068258  123843 command_runner.go:130] >     {
	I0729 18:24:35.068263  123843 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0729 18:24:35.068269  123843 command_runner.go:130] >       "repoTags": [
	I0729 18:24:35.068274  123843 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0729 18:24:35.068279  123843 command_runner.go:130] >       ],
	I0729 18:24:35.068283  123843 command_runner.go:130] >       "repoDigests": [
	I0729 18:24:35.068291  123843 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0729 18:24:35.068300  123843 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0729 18:24:35.068305  123843 command_runner.go:130] >       ],
	I0729 18:24:35.068309  123843 command_runner.go:130] >       "size": "750414",
	I0729 18:24:35.068312  123843 command_runner.go:130] >       "uid": {
	I0729 18:24:35.068317  123843 command_runner.go:130] >         "value": "65535"
	I0729 18:24:35.068325  123843 command_runner.go:130] >       },
	I0729 18:24:35.068331  123843 command_runner.go:130] >       "username": "",
	I0729 18:24:35.068336  123843 command_runner.go:130] >       "spec": null,
	I0729 18:24:35.068341  123843 command_runner.go:130] >       "pinned": true
	I0729 18:24:35.068344  123843 command_runner.go:130] >     }
	I0729 18:24:35.068348  123843 command_runner.go:130] >   ]
	I0729 18:24:35.068351  123843 command_runner.go:130] > }
	I0729 18:24:35.068797  123843 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 18:24:35.068817  123843 cache_images.go:84] Images are preloaded, skipping loading
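Both "crictl images --output json" runs return the complete v1.30.3 preload set, so image extraction is skipped. For a shorter, human-readable view of the same data one can pipe the JSON through jq (an assumption: the log does not show jq being present on the node):

    sudo crictl images --output json | jq -r '.images[].repoTags[]'
    # registry.k8s.io/kube-apiserver:v1.30.3
    # registry.k8s.io/kube-controller-manager:v1.30.3
    # ...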
	I0729 18:24:35.068826  123843 kubeadm.go:934] updating node { 192.168.39.211 8443 v1.30.3 crio true true} ...
	I0729 18:24:35.068937  123843 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-976328 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.211
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-976328 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
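The fragment above is the kubelet systemd override minikube renders for this control-plane node, pinning --hostname-override to multinode-976328 and --node-ip to 192.168.39.211. To see the unit as systemd actually resolves it on the node, one option is the following sketch (the exact drop-in path is managed by minikube and may vary):

    sudo systemctl cat kubelet
    # prints the base unit plus the drop-in containing the ExecStart flags shown above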
	I0729 18:24:35.069007  123843 ssh_runner.go:195] Run: crio config
	I0729 18:24:35.102694  123843 command_runner.go:130] ! time="2024-07-29 18:24:35.077156228Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0729 18:24:35.107891  123843 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0729 18:24:35.120804  123843 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0729 18:24:35.120830  123843 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0729 18:24:35.120840  123843 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0729 18:24:35.120845  123843 command_runner.go:130] > #
	I0729 18:24:35.120876  123843 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0729 18:24:35.120889  123843 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0729 18:24:35.120902  123843 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0729 18:24:35.120924  123843 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0729 18:24:35.120932  123843 command_runner.go:130] > # reload'.
	I0729 18:24:35.120945  123843 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0729 18:24:35.120958  123843 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0729 18:24:35.120969  123843 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0729 18:24:35.120975  123843 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0729 18:24:35.120980  123843 command_runner.go:130] > [crio]
	I0729 18:24:35.120987  123843 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0729 18:24:35.120995  123843 command_runner.go:130] > # containers images, in this directory.
	I0729 18:24:35.121001  123843 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0729 18:24:35.121013  123843 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0729 18:24:35.121020  123843 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0729 18:24:35.121027  123843 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0729 18:24:35.121033  123843 command_runner.go:130] > # imagestore = ""
	I0729 18:24:35.121039  123843 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0729 18:24:35.121046  123843 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0729 18:24:35.121053  123843 command_runner.go:130] > storage_driver = "overlay"
	I0729 18:24:35.121058  123843 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0729 18:24:35.121066  123843 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0729 18:24:35.121073  123843 command_runner.go:130] > storage_option = [
	I0729 18:24:35.121080  123843 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0729 18:24:35.121083  123843 command_runner.go:130] > ]
	I0729 18:24:35.121089  123843 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0729 18:24:35.121099  123843 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0729 18:24:35.121109  123843 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0729 18:24:35.121120  123843 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0729 18:24:35.121131  123843 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0729 18:24:35.121139  123843 command_runner.go:130] > # always happen on a node reboot
	I0729 18:24:35.121149  123843 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0729 18:24:35.121169  123843 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0729 18:24:35.121181  123843 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0729 18:24:35.121190  123843 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0729 18:24:35.121199  123843 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0729 18:24:35.121212  123843 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0729 18:24:35.121225  123843 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0729 18:24:35.121234  123843 command_runner.go:130] > # internal_wipe = true
	I0729 18:24:35.121254  123843 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0729 18:24:35.121266  123843 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0729 18:24:35.121275  123843 command_runner.go:130] > # internal_repair = false
	I0729 18:24:35.121286  123843 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0729 18:24:35.121297  123843 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0729 18:24:35.121309  123843 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0729 18:24:35.121319  123843 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0729 18:24:35.121330  123843 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0729 18:24:35.121338  123843 command_runner.go:130] > [crio.api]
	I0729 18:24:35.121349  123843 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0729 18:24:35.121360  123843 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0729 18:24:35.121370  123843 command_runner.go:130] > # IP address on which the stream server will listen.
	I0729 18:24:35.121378  123843 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0729 18:24:35.121390  123843 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0729 18:24:35.121400  123843 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0729 18:24:35.121410  123843 command_runner.go:130] > # stream_port = "0"
	I0729 18:24:35.121420  123843 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0729 18:24:35.121428  123843 command_runner.go:130] > # stream_enable_tls = false
	I0729 18:24:35.121436  123843 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0729 18:24:35.121446  123843 command_runner.go:130] > # stream_idle_timeout = ""
	I0729 18:24:35.121470  123843 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0729 18:24:35.121483  123843 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0729 18:24:35.121492  123843 command_runner.go:130] > # minutes.
	I0729 18:24:35.121500  123843 command_runner.go:130] > # stream_tls_cert = ""
	I0729 18:24:35.121512  123843 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0729 18:24:35.121524  123843 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0729 18:24:35.121533  123843 command_runner.go:130] > # stream_tls_key = ""
	I0729 18:24:35.121546  123843 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0729 18:24:35.121563  123843 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0729 18:24:35.121606  123843 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0729 18:24:35.121612  123843 command_runner.go:130] > # stream_tls_ca = ""
	I0729 18:24:35.121619  123843 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0729 18:24:35.121625  123843 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0729 18:24:35.121632  123843 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0729 18:24:35.121638  123843 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0729 18:24:35.121645  123843 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0729 18:24:35.121657  123843 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0729 18:24:35.121663  123843 command_runner.go:130] > [crio.runtime]
	I0729 18:24:35.121669  123843 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0729 18:24:35.121676  123843 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0729 18:24:35.121680  123843 command_runner.go:130] > # "nofile=1024:2048"
	I0729 18:24:35.121687  123843 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0729 18:24:35.121693  123843 command_runner.go:130] > # default_ulimits = [
	I0729 18:24:35.121696  123843 command_runner.go:130] > # ]
	I0729 18:24:35.121704  123843 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0729 18:24:35.121710  123843 command_runner.go:130] > # no_pivot = false
	I0729 18:24:35.121716  123843 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0729 18:24:35.121724  123843 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0729 18:24:35.121731  123843 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0729 18:24:35.121738  123843 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0729 18:24:35.121746  123843 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0729 18:24:35.121752  123843 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0729 18:24:35.121758  123843 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0729 18:24:35.121762  123843 command_runner.go:130] > # Cgroup setting for conmon
	I0729 18:24:35.121771  123843 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0729 18:24:35.121775  123843 command_runner.go:130] > conmon_cgroup = "pod"
	I0729 18:24:35.121781  123843 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0729 18:24:35.121788  123843 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0729 18:24:35.121796  123843 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0729 18:24:35.121803  123843 command_runner.go:130] > conmon_env = [
	I0729 18:24:35.121809  123843 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0729 18:24:35.121814  123843 command_runner.go:130] > ]
	I0729 18:24:35.121819  123843 command_runner.go:130] > # Additional environment variables to set for all the
	I0729 18:24:35.121826  123843 command_runner.go:130] > # containers. These are overridden if set in the
	I0729 18:24:35.121831  123843 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0729 18:24:35.121837  123843 command_runner.go:130] > # default_env = [
	I0729 18:24:35.121840  123843 command_runner.go:130] > # ]
	I0729 18:24:35.121847  123843 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0729 18:24:35.121855  123843 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0729 18:24:35.121861  123843 command_runner.go:130] > # selinux = false
	I0729 18:24:35.121867  123843 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0729 18:24:35.121875  123843 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0729 18:24:35.121885  123843 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0729 18:24:35.121891  123843 command_runner.go:130] > # seccomp_profile = ""
	I0729 18:24:35.121897  123843 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0729 18:24:35.121904  123843 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0729 18:24:35.121909  123843 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0729 18:24:35.121916  123843 command_runner.go:130] > # which might increase security.
	I0729 18:24:35.121920  123843 command_runner.go:130] > # This option is currently deprecated,
	I0729 18:24:35.121927  123843 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0729 18:24:35.121932  123843 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0729 18:24:35.121939  123843 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0729 18:24:35.121948  123843 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0729 18:24:35.121957  123843 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0729 18:24:35.121964  123843 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0729 18:24:35.121970  123843 command_runner.go:130] > # This option supports live configuration reload.
	I0729 18:24:35.121974  123843 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0729 18:24:35.121980  123843 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0729 18:24:35.121986  123843 command_runner.go:130] > # the cgroup blockio controller.
	I0729 18:24:35.121991  123843 command_runner.go:130] > # blockio_config_file = ""
	I0729 18:24:35.121999  123843 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0729 18:24:35.122005  123843 command_runner.go:130] > # blockio parameters.
	I0729 18:24:35.122009  123843 command_runner.go:130] > # blockio_reload = false
	I0729 18:24:35.122017  123843 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0729 18:24:35.122021  123843 command_runner.go:130] > # irqbalance daemon.
	I0729 18:24:35.122026  123843 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0729 18:24:35.122036  123843 command_runner.go:130] > # irqbalance_config_restore_file allows setting a cpu mask CRI-O should
	I0729 18:24:35.122045  123843 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0729 18:24:35.122053  123843 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0729 18:24:35.122059  123843 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0729 18:24:35.122067  123843 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0729 18:24:35.122073  123843 command_runner.go:130] > # This option supports live configuration reload.
	I0729 18:24:35.122079  123843 command_runner.go:130] > # rdt_config_file = ""
	I0729 18:24:35.122084  123843 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0729 18:24:35.122090  123843 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0729 18:24:35.122123  123843 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0729 18:24:35.122129  123843 command_runner.go:130] > # separate_pull_cgroup = ""
	I0729 18:24:35.122135  123843 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0729 18:24:35.122147  123843 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0729 18:24:35.122153  123843 command_runner.go:130] > # will be added.
	I0729 18:24:35.122158  123843 command_runner.go:130] > # default_capabilities = [
	I0729 18:24:35.122163  123843 command_runner.go:130] > # 	"CHOWN",
	I0729 18:24:35.122167  123843 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0729 18:24:35.122172  123843 command_runner.go:130] > # 	"FSETID",
	I0729 18:24:35.122176  123843 command_runner.go:130] > # 	"FOWNER",
	I0729 18:24:35.122181  123843 command_runner.go:130] > # 	"SETGID",
	I0729 18:24:35.122185  123843 command_runner.go:130] > # 	"SETUID",
	I0729 18:24:35.122191  123843 command_runner.go:130] > # 	"SETPCAP",
	I0729 18:24:35.122194  123843 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0729 18:24:35.122200  123843 command_runner.go:130] > # 	"KILL",
	I0729 18:24:35.122203  123843 command_runner.go:130] > # ]
	I0729 18:24:35.122211  123843 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0729 18:24:35.122219  123843 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0729 18:24:35.122225  123843 command_runner.go:130] > # add_inheritable_capabilities = false
	I0729 18:24:35.122231  123843 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0729 18:24:35.122238  123843 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0729 18:24:35.122241  123843 command_runner.go:130] > default_sysctls = [
	I0729 18:24:35.122248  123843 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0729 18:24:35.122251  123843 command_runner.go:130] > ]
	I0729 18:24:35.122256  123843 command_runner.go:130] > # List of devices on the host that a
	I0729 18:24:35.122264  123843 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0729 18:24:35.122270  123843 command_runner.go:130] > # allowed_devices = [
	I0729 18:24:35.122273  123843 command_runner.go:130] > # 	"/dev/fuse",
	I0729 18:24:35.122278  123843 command_runner.go:130] > # ]
	I0729 18:24:35.122283  123843 command_runner.go:130] > # List of additional devices, specified as
	I0729 18:24:35.122296  123843 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0729 18:24:35.122304  123843 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0729 18:24:35.122311  123843 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0729 18:24:35.122317  123843 command_runner.go:130] > # additional_devices = [
	I0729 18:24:35.122320  123843 command_runner.go:130] > # ]
	I0729 18:24:35.122327  123843 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0729 18:24:35.122331  123843 command_runner.go:130] > # cdi_spec_dirs = [
	I0729 18:24:35.122334  123843 command_runner.go:130] > # 	"/etc/cdi",
	I0729 18:24:35.122342  123843 command_runner.go:130] > # 	"/var/run/cdi",
	I0729 18:24:35.122352  123843 command_runner.go:130] > # ]
	I0729 18:24:35.122364  123843 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0729 18:24:35.122376  123843 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0729 18:24:35.122384  123843 command_runner.go:130] > # Defaults to false.
	I0729 18:24:35.122394  123843 command_runner.go:130] > # device_ownership_from_security_context = false
	I0729 18:24:35.122406  123843 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0729 18:24:35.122417  123843 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0729 18:24:35.122426  123843 command_runner.go:130] > # hooks_dir = [
	I0729 18:24:35.122435  123843 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0729 18:24:35.122441  123843 command_runner.go:130] > # ]
	I0729 18:24:35.122446  123843 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0729 18:24:35.122454  123843 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0729 18:24:35.122461  123843 command_runner.go:130] > # its default mounts from the following two files:
	I0729 18:24:35.122465  123843 command_runner.go:130] > #
	I0729 18:24:35.122471  123843 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0729 18:24:35.122479  123843 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0729 18:24:35.122484  123843 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0729 18:24:35.122489  123843 command_runner.go:130] > #
	I0729 18:24:35.122494  123843 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0729 18:24:35.122503  123843 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0729 18:24:35.122509  123843 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0729 18:24:35.122515  123843 command_runner.go:130] > #      only add mounts it finds in this file.
	I0729 18:24:35.122519  123843 command_runner.go:130] > #
	I0729 18:24:35.122522  123843 command_runner.go:130] > # default_mounts_file = ""
	I0729 18:24:35.122530  123843 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0729 18:24:35.122539  123843 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0729 18:24:35.122544  123843 command_runner.go:130] > pids_limit = 1024
	I0729 18:24:35.122555  123843 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0729 18:24:35.122562  123843 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0729 18:24:35.122568  123843 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0729 18:24:35.122578  123843 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0729 18:24:35.122583  123843 command_runner.go:130] > # log_size_max = -1
	I0729 18:24:35.122590  123843 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0729 18:24:35.122599  123843 command_runner.go:130] > # log_to_journald = false
	I0729 18:24:35.122607  123843 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0729 18:24:35.122612  123843 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0729 18:24:35.122626  123843 command_runner.go:130] > # Path to directory for container attach sockets.
	I0729 18:24:35.122633  123843 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0729 18:24:35.122638  123843 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0729 18:24:35.122644  123843 command_runner.go:130] > # bind_mount_prefix = ""
	I0729 18:24:35.122649  123843 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0729 18:24:35.122654  123843 command_runner.go:130] > # read_only = false
	I0729 18:24:35.122659  123843 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0729 18:24:35.122667  123843 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0729 18:24:35.122674  123843 command_runner.go:130] > # live configuration reload.
	I0729 18:24:35.122677  123843 command_runner.go:130] > # log_level = "info"
	I0729 18:24:35.122685  123843 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0729 18:24:35.122689  123843 command_runner.go:130] > # This option supports live configuration reload.
	I0729 18:24:35.122695  123843 command_runner.go:130] > # log_filter = ""
	I0729 18:24:35.122701  123843 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0729 18:24:35.122710  123843 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0729 18:24:35.122717  123843 command_runner.go:130] > # separated by comma.
	I0729 18:24:35.122724  123843 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 18:24:35.122730  123843 command_runner.go:130] > # uid_mappings = ""
	I0729 18:24:35.122736  123843 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0729 18:24:35.122743  123843 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0729 18:24:35.122749  123843 command_runner.go:130] > # separated by comma.
	I0729 18:24:35.122756  123843 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 18:24:35.122762  123843 command_runner.go:130] > # gid_mappings = ""
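	For illustration only (not part of this cluster's configuration), a mapping in the containerUID:HostUID:Size / containerGID:HostGID:Size form described above might look like this; the host ID range 100000:65536 is an assumed value:
	# hypothetical user-namespace mappings (values are assumptions, not taken from this run)
	uid_mappings = "0:100000:65536"
	gid_mappings = "0:100000:65536"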
	I0729 18:24:35.122768  123843 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0729 18:24:35.122776  123843 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0729 18:24:35.122782  123843 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0729 18:24:35.122791  123843 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 18:24:35.122797  123843 command_runner.go:130] > # minimum_mappable_uid = -1
	I0729 18:24:35.122804  123843 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0729 18:24:35.122812  123843 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0729 18:24:35.122818  123843 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0729 18:24:35.122827  123843 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 18:24:35.122835  123843 command_runner.go:130] > # minimum_mappable_gid = -1
	I0729 18:24:35.122841  123843 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0729 18:24:35.122849  123843 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0729 18:24:35.122861  123843 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0729 18:24:35.122872  123843 command_runner.go:130] > # ctr_stop_timeout = 30
	I0729 18:24:35.122880  123843 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0729 18:24:35.122888  123843 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0729 18:24:35.122893  123843 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0729 18:24:35.122900  123843 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0729 18:24:35.122904  123843 command_runner.go:130] > drop_infra_ctr = false
	I0729 18:24:35.122912  123843 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0729 18:24:35.122920  123843 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0729 18:24:35.122927  123843 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0729 18:24:35.122933  123843 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0729 18:24:35.122940  123843 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0729 18:24:35.122948  123843 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0729 18:24:35.122955  123843 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0729 18:24:35.122960  123843 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0729 18:24:35.122966  123843 command_runner.go:130] > # shared_cpuset = ""
	I0729 18:24:35.122971  123843 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0729 18:24:35.122976  123843 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0729 18:24:35.122982  123843 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0729 18:24:35.122988  123843 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0729 18:24:35.122995  123843 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0729 18:24:35.123000  123843 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0729 18:24:35.123008  123843 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0729 18:24:35.123012  123843 command_runner.go:130] > # enable_criu_support = false
	I0729 18:24:35.123017  123843 command_runner.go:130] > # Enable/disable the generation of container and
	I0729 18:24:35.123025  123843 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG.
	I0729 18:24:35.123031  123843 command_runner.go:130] > # enable_pod_events = false
	I0729 18:24:35.123037  123843 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0729 18:24:35.123052  123843 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0729 18:24:35.123056  123843 command_runner.go:130] > # default_runtime = "runc"
	I0729 18:24:35.123061  123843 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0729 18:24:35.123071  123843 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I0729 18:24:35.123080  123843 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0729 18:24:35.123090  123843 command_runner.go:130] > # creation as a file is not desired either.
	I0729 18:24:35.123099  123843 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0729 18:24:35.123106  123843 command_runner.go:130] > # the hostname is being managed dynamically.
	I0729 18:24:35.123118  123843 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0729 18:24:35.123123  123843 command_runner.go:130] > # ]
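	As a sketch of the option just described, using the /etc/hostname example from the comment above (this value is not set in this cluster's configuration):
	absent_mount_sources_to_reject = [
		"/etc/hostname",
	]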
	I0729 18:24:35.123129  123843 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0729 18:24:35.123138  123843 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0729 18:24:35.123145  123843 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0729 18:24:35.123150  123843 command_runner.go:130] > # Each entry in the table should follow the format:
	I0729 18:24:35.123155  123843 command_runner.go:130] > #
	I0729 18:24:35.123159  123843 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0729 18:24:35.123166  123843 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0729 18:24:35.123212  123843 command_runner.go:130] > # runtime_type = "oci"
	I0729 18:24:35.123219  123843 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0729 18:24:35.123224  123843 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0729 18:24:35.123228  123843 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0729 18:24:35.123232  123843 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0729 18:24:35.123236  123843 command_runner.go:130] > # monitor_env = []
	I0729 18:24:35.123240  123843 command_runner.go:130] > # privileged_without_host_devices = false
	I0729 18:24:35.123247  123843 command_runner.go:130] > # allowed_annotations = []
	I0729 18:24:35.123253  123843 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0729 18:24:35.123259  123843 command_runner.go:130] > # Where:
	I0729 18:24:35.123264  123843 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0729 18:24:35.123272  123843 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0729 18:24:35.123279  123843 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0729 18:24:35.123287  123843 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0729 18:24:35.123292  123843 command_runner.go:130] > #   in $PATH.
	I0729 18:24:35.123299  123843 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0729 18:24:35.123305  123843 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0729 18:24:35.123311  123843 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0729 18:24:35.123317  123843 command_runner.go:130] > #   state.
	I0729 18:24:35.123322  123843 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0729 18:24:35.123335  123843 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0729 18:24:35.123346  123843 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0729 18:24:35.123357  123843 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0729 18:24:35.123368  123843 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0729 18:24:35.123380  123843 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0729 18:24:35.123393  123843 command_runner.go:130] > #   The currently recognized values are:
	I0729 18:24:35.123406  123843 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0729 18:24:35.123426  123843 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0729 18:24:35.123438  123843 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0729 18:24:35.123449  123843 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0729 18:24:35.123463  123843 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0729 18:24:35.123472  123843 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0729 18:24:35.123481  123843 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0729 18:24:35.123487  123843 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0729 18:24:35.123495  123843 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0729 18:24:35.123501  123843 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0729 18:24:35.123507  123843 command_runner.go:130] > #   deprecated option "conmon".
	I0729 18:24:35.123514  123843 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0729 18:24:35.123521  123843 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0729 18:24:35.123527  123843 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0729 18:24:35.123534  123843 command_runner.go:130] > #   should be moved to the container's cgroup
	I0729 18:24:35.123540  123843 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0729 18:24:35.123547  123843 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0729 18:24:35.123558  123843 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0729 18:24:35.123569  123843 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0729 18:24:35.123573  123843 command_runner.go:130] > #
	I0729 18:24:35.123578  123843 command_runner.go:130] > # Using the seccomp notifier feature:
	I0729 18:24:35.123583  123843 command_runner.go:130] > #
	I0729 18:24:35.123588  123843 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0729 18:24:35.123596  123843 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0729 18:24:35.123600  123843 command_runner.go:130] > #
	I0729 18:24:35.123606  123843 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0729 18:24:35.123613  123843 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0729 18:24:35.123621  123843 command_runner.go:130] > #
	I0729 18:24:35.123627  123843 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0729 18:24:35.123633  123843 command_runner.go:130] > # feature.
	I0729 18:24:35.123636  123843 command_runner.go:130] > #
	I0729 18:24:35.123642  123843 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0729 18:24:35.123650  123843 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0729 18:24:35.123656  123843 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0729 18:24:35.123673  123843 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0729 18:24:35.123681  123843 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0729 18:24:35.123686  123843 command_runner.go:130] > #
	I0729 18:24:35.123696  123843 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0729 18:24:35.123705  123843 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0729 18:24:35.123710  123843 command_runner.go:130] > #
	I0729 18:24:35.123716  123843 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0729 18:24:35.123724  123843 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0729 18:24:35.123728  123843 command_runner.go:130] > #
	I0729 18:24:35.123734  123843 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0729 18:24:35.123743  123843 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0729 18:24:35.123747  123843 command_runner.go:130] > # limitation.
	I0729 18:24:35.123752  123843 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0729 18:24:35.123759  123843 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0729 18:24:35.123763  123843 command_runner.go:130] > runtime_type = "oci"
	I0729 18:24:35.123769  123843 command_runner.go:130] > runtime_root = "/run/runc"
	I0729 18:24:35.123773  123843 command_runner.go:130] > runtime_config_path = ""
	I0729 18:24:35.123780  123843 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0729 18:24:35.123784  123843 command_runner.go:130] > monitor_cgroup = "pod"
	I0729 18:24:35.123790  123843 command_runner.go:130] > monitor_exec_cgroup = ""
	I0729 18:24:35.123794  123843 command_runner.go:130] > monitor_env = [
	I0729 18:24:35.123803  123843 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0729 18:24:35.123808  123843 command_runner.go:130] > ]
	I0729 18:24:35.123812  123843 command_runner.go:130] > privileged_without_host_devices = false
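	For comparison with the [crio.runtime.runtimes.runc] entry above, a hypothetical additional runtime handler that permits the seccomp notifier annotation discussed earlier could be declared as follows; the handler name, crun path, and root directory are assumptions and are not present in this cluster's configuration:
	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"   # assumed install path; verify on the host
	runtime_type = "oci"
	runtime_root = "/run/crun"
	monitor_path = "/usr/libexec/crio/conmon"
	monitor_cgroup = "pod"
	monitor_env = [
		"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	]
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",
	]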
	I0729 18:24:35.123818  123843 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0729 18:24:35.123826  123843 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0729 18:24:35.123831  123843 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0729 18:24:35.123841  123843 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0729 18:24:35.123850  123843 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0729 18:24:35.123857  123843 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0729 18:24:35.123866  123843 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0729 18:24:35.123875  123843 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0729 18:24:35.123880  123843 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0729 18:24:35.123886  123843 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0729 18:24:35.123890  123843 command_runner.go:130] > # Example:
	I0729 18:24:35.123894  123843 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0729 18:24:35.123898  123843 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0729 18:24:35.123905  123843 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0729 18:24:35.123909  123843 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0729 18:24:35.123918  123843 command_runner.go:130] > # cpuset = 0
	I0729 18:24:35.123923  123843 command_runner.go:130] > # cpushares = "0-1"
	I0729 18:24:35.123926  123843 command_runner.go:130] > # Where:
	I0729 18:24:35.123930  123843 command_runner.go:130] > # The workload name is workload-type.
	I0729 18:24:35.123936  123843 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0729 18:24:35.123941  123843 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0729 18:24:35.123946  123843 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0729 18:24:35.123959  123843 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0729 18:24:35.123964  123843 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0729 18:24:35.123969  123843 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0729 18:24:35.123975  123843 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0729 18:24:35.123978  123843 command_runner.go:130] > # Default value is set to true
	I0729 18:24:35.123982  123843 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0729 18:24:35.123989  123843 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0729 18:24:35.123996  123843 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0729 18:24:35.124000  123843 command_runner.go:130] > # Default value is set to 'false'
	I0729 18:24:35.124006  123843 command_runner.go:130] > # disable_hostport_mapping = false
	I0729 18:24:35.124012  123843 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0729 18:24:35.124017  123843 command_runner.go:130] > #
	I0729 18:24:35.124023  123843 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0729 18:24:35.124030  123843 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0729 18:24:35.124036  123843 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0729 18:24:35.124044  123843 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0729 18:24:35.124050  123843 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0729 18:24:35.124056  123843 command_runner.go:130] > [crio.image]
	I0729 18:24:35.124061  123843 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0729 18:24:35.124067  123843 command_runner.go:130] > # default_transport = "docker://"
	I0729 18:24:35.124078  123843 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0729 18:24:35.124086  123843 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0729 18:24:35.124092  123843 command_runner.go:130] > # global_auth_file = ""
	I0729 18:24:35.124097  123843 command_runner.go:130] > # The image used to instantiate infra containers.
	I0729 18:24:35.124103  123843 command_runner.go:130] > # This option supports live configuration reload.
	I0729 18:24:35.124108  123843 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0729 18:24:35.124116  123843 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0729 18:24:35.124126  123843 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0729 18:24:35.124131  123843 command_runner.go:130] > # This option supports live configuration reload.
	I0729 18:24:35.124242  123843 command_runner.go:130] > # pause_image_auth_file = ""
	I0729 18:24:35.124391  123843 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0729 18:24:35.124410  123843 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0729 18:24:35.124419  123843 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0729 18:24:35.124433  123843 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0729 18:24:35.124440  123843 command_runner.go:130] > # pause_command = "/pause"
	I0729 18:24:35.124449  123843 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0729 18:24:35.124462  123843 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0729 18:24:35.124475  123843 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0729 18:24:35.124489  123843 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0729 18:24:35.124509  123843 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0729 18:24:35.124523  123843 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0729 18:24:35.124528  123843 command_runner.go:130] > # pinned_images = [
	I0729 18:24:35.124533  123843 command_runner.go:130] > # ]
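	A hypothetical pinned_images list illustrating the exact, glob, and keyword patterns described above (the image names are examples, not taken from this run):
	pinned_images = [
		"registry.k8s.io/pause:3.9",   # exact match
		"registry.k8s.io/kube-*",      # glob: wildcard at the end
		"*coredns*",                   # keyword: wildcards on both ends
	]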
	I0729 18:24:35.124552  123843 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0729 18:24:35.124561  123843 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0729 18:24:35.124576  123843 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0729 18:24:35.124585  123843 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0729 18:24:35.124633  123843 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0729 18:24:35.124684  123843 command_runner.go:130] > # signature_policy = ""
	I0729 18:24:35.124695  123843 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0729 18:24:35.124718  123843 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0729 18:24:35.124734  123843 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0729 18:24:35.124744  123843 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0729 18:24:35.124757  123843 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0729 18:24:35.124768  123843 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0729 18:24:35.124778  123843 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0729 18:24:35.124793  123843 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0729 18:24:35.124799  123843 command_runner.go:130] > # changing them here.
	I0729 18:24:35.124808  123843 command_runner.go:130] > # insecure_registries = [
	I0729 18:24:35.124813  123843 command_runner.go:130] > # ]
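	As the comments recommend, registries are better configured in /etc/containers/registries.conf; a minimal sketch of such an entry follows (the registry host is invented for illustration):
	# /etc/containers/registries.conf (containers-registries.conf v2 TOML format)
	[[registry]]
	prefix = "registry.example.internal"
	location = "registry.example.internal"
	insecure = true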
	I0729 18:24:35.124827  123843 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0729 18:24:35.124835  123843 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0729 18:24:35.124841  123843 command_runner.go:130] > # image_volumes = "mkdir"
	I0729 18:24:35.124849  123843 command_runner.go:130] > # Temporary directory to use for storing big files
	I0729 18:24:35.124875  123843 command_runner.go:130] > # big_files_temporary_dir = ""
	I0729 18:24:35.124890  123843 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0729 18:24:35.124895  123843 command_runner.go:130] > # CNI plugins.
	I0729 18:24:35.124901  123843 command_runner.go:130] > [crio.network]
	I0729 18:24:35.124914  123843 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0729 18:24:35.124923  123843 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0729 18:24:35.124929  123843 command_runner.go:130] > # cni_default_network = ""
	I0729 18:24:35.124937  123843 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0729 18:24:35.124948  123843 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0729 18:24:35.124956  123843 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0729 18:24:35.124961  123843 command_runner.go:130] > # plugin_dirs = [
	I0729 18:24:35.124968  123843 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0729 18:24:35.124972  123843 command_runner.go:130] > # ]
	I0729 18:24:35.124986  123843 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0729 18:24:35.124992  123843 command_runner.go:130] > [crio.metrics]
	I0729 18:24:35.124999  123843 command_runner.go:130] > # Globally enable or disable metrics support.
	I0729 18:24:35.125005  123843 command_runner.go:130] > enable_metrics = true
	I0729 18:24:35.125017  123843 command_runner.go:130] > # Specify enabled metrics collectors.
	I0729 18:24:35.125024  123843 command_runner.go:130] > # Per default all metrics are enabled.
	I0729 18:24:35.125040  123843 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0729 18:24:35.125055  123843 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0729 18:24:35.125063  123843 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0729 18:24:35.125069  123843 command_runner.go:130] > # metrics_collectors = [
	I0729 18:24:35.125075  123843 command_runner.go:130] > # 	"operations",
	I0729 18:24:35.125084  123843 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0729 18:24:35.125095  123843 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0729 18:24:35.125101  123843 command_runner.go:130] > # 	"operations_errors",
	I0729 18:24:35.125107  123843 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0729 18:24:35.125113  123843 command_runner.go:130] > # 	"image_pulls_by_name",
	I0729 18:24:35.125125  123843 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0729 18:24:35.125137  123843 command_runner.go:130] > # 	"image_pulls_failures",
	I0729 18:24:35.125143  123843 command_runner.go:130] > # 	"image_pulls_successes",
	I0729 18:24:35.125150  123843 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0729 18:24:35.125156  123843 command_runner.go:130] > # 	"image_layer_reuse",
	I0729 18:24:35.125163  123843 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0729 18:24:35.125174  123843 command_runner.go:130] > # 	"containers_oom_total",
	I0729 18:24:35.125181  123843 command_runner.go:130] > # 	"containers_oom",
	I0729 18:24:35.125188  123843 command_runner.go:130] > # 	"processes_defunct",
	I0729 18:24:35.125193  123843 command_runner.go:130] > # 	"operations_total",
	I0729 18:24:35.125200  123843 command_runner.go:130] > # 	"operations_latency_seconds",
	I0729 18:24:35.125213  123843 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0729 18:24:35.125219  123843 command_runner.go:130] > # 	"operations_errors_total",
	I0729 18:24:35.125226  123843 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0729 18:24:35.125232  123843 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0729 18:24:35.125239  123843 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0729 18:24:35.125251  123843 command_runner.go:130] > # 	"image_pulls_success_total",
	I0729 18:24:35.125261  123843 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0729 18:24:35.125268  123843 command_runner.go:130] > # 	"containers_oom_count_total",
	I0729 18:24:35.125275  123843 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0729 18:24:35.125287  123843 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0729 18:24:35.125292  123843 command_runner.go:130] > # ]
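	A minimal sketch of enabling only a subset of collectors, relying on the prefix equivalence noted above ("operations" behaves the same as "crio_operations" and "container_runtime_crio_operations"); this is not the configuration used in this run:
	[crio.metrics]
	enable_metrics = true
	metrics_collectors = [
		"operations",            # equivalent to "crio_operations"
		"image_pulls_failures",  # equivalent to "crio_image_pulls_failures"
	]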
	I0729 18:24:35.125300  123843 command_runner.go:130] > # The port on which the metrics server will listen.
	I0729 18:24:35.125307  123843 command_runner.go:130] > # metrics_port = 9090
	I0729 18:24:35.125314  123843 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0729 18:24:35.125325  123843 command_runner.go:130] > # metrics_socket = ""
	I0729 18:24:35.125338  123843 command_runner.go:130] > # The certificate for the secure metrics server.
	I0729 18:24:35.125348  123843 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0729 18:24:35.125362  123843 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0729 18:24:35.125369  123843 command_runner.go:130] > # certificate on any modification event.
	I0729 18:24:35.125375  123843 command_runner.go:130] > # metrics_cert = ""
	I0729 18:24:35.125382  123843 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0729 18:24:35.125394  123843 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0729 18:24:35.125400  123843 command_runner.go:130] > # metrics_key = ""
	I0729 18:24:35.125409  123843 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0729 18:24:35.125414  123843 command_runner.go:130] > [crio.tracing]
	I0729 18:24:35.125428  123843 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0729 18:24:35.125435  123843 command_runner.go:130] > # enable_tracing = false
	I0729 18:24:35.125443  123843 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0729 18:24:35.125450  123843 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0729 18:24:35.125469  123843 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0729 18:24:35.125476  123843 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0729 18:24:35.125482  123843 command_runner.go:130] > # CRI-O NRI configuration.
	I0729 18:24:35.125488  123843 command_runner.go:130] > [crio.nri]
	I0729 18:24:35.125513  123843 command_runner.go:130] > # Globally enable or disable NRI.
	I0729 18:24:35.125519  123843 command_runner.go:130] > # enable_nri = false
	I0729 18:24:35.125526  123843 command_runner.go:130] > # NRI socket to listen on.
	I0729 18:24:35.125533  123843 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0729 18:24:35.125539  123843 command_runner.go:130] > # NRI plugin directory to use.
	I0729 18:24:35.125551  123843 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0729 18:24:35.125558  123843 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0729 18:24:35.125566  123843 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0729 18:24:35.125574  123843 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0729 18:24:35.125585  123843 command_runner.go:130] > # nri_disable_connections = false
	I0729 18:24:35.125645  123843 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0729 18:24:35.125681  123843 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0729 18:24:35.125691  123843 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0729 18:24:35.125704  123843 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0729 18:24:35.125725  123843 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0729 18:24:35.125735  123843 command_runner.go:130] > [crio.stats]
	I0729 18:24:35.125757  123843 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0729 18:24:35.125776  123843 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0729 18:24:35.125789  123843 command_runner.go:130] > # stats_collection_period = 0
	I0729 18:24:35.126194  123843 cni.go:84] Creating CNI manager for ""
	I0729 18:24:35.126206  123843 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0729 18:24:35.126216  123843 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 18:24:35.126238  123843 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.211 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-976328 NodeName:multinode-976328 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.211"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.211 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 18:24:35.126369  123843 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.211
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-976328"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.211
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.211"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 18:24:35.126434  123843 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 18:24:35.136673  123843 command_runner.go:130] > kubeadm
	I0729 18:24:35.136689  123843 command_runner.go:130] > kubectl
	I0729 18:24:35.136694  123843 command_runner.go:130] > kubelet
	I0729 18:24:35.136718  123843 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 18:24:35.136779  123843 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 18:24:35.146215  123843 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0729 18:24:35.162281  123843 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 18:24:35.178642  123843 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0729 18:24:35.194838  123843 ssh_runner.go:195] Run: grep 192.168.39.211	control-plane.minikube.internal$ /etc/hosts
	I0729 18:24:35.198434  123843 command_runner.go:130] > 192.168.39.211	control-plane.minikube.internal
	I0729 18:24:35.198575  123843 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:24:35.338496  123843 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:24:35.353476  123843 certs.go:68] Setting up /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/multinode-976328 for IP: 192.168.39.211
	I0729 18:24:35.353502  123843 certs.go:194] generating shared ca certs ...
	I0729 18:24:35.353521  123843 certs.go:226] acquiring lock for ca certs: {Name:mk20106d58b3f22ea0fc0f4b499fa3fa572a2690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:24:35.353706  123843 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key
	I0729 18:24:35.353772  123843 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key
	I0729 18:24:35.353786  123843 certs.go:256] generating profile certs ...
	I0729 18:24:35.353885  123843 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/multinode-976328/client.key
	I0729 18:24:35.353958  123843 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/multinode-976328/apiserver.key.21ce94e8
	I0729 18:24:35.354020  123843 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/multinode-976328/proxy-client.key
	I0729 18:24:35.354034  123843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 18:24:35.354049  123843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 18:24:35.354067  123843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 18:24:35.354085  123843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 18:24:35.354101  123843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/multinode-976328/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 18:24:35.354120  123843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/multinode-976328/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 18:24:35.354134  123843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/multinode-976328/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 18:24:35.354151  123843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/multinode-976328/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 18:24:35.354219  123843 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282.pem (1338 bytes)
	W0729 18:24:35.354260  123843 certs.go:480] ignoring /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282_empty.pem, impossibly tiny 0 bytes
	I0729 18:24:35.354274  123843 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 18:24:35.354306  123843 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem (1078 bytes)
	I0729 18:24:35.354337  123843 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem (1123 bytes)
	I0729 18:24:35.354367  123843 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem (1679 bytes)
	I0729 18:24:35.354416  123843 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem (1708 bytes)
	I0729 18:24:35.354459  123843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282.pem -> /usr/share/ca-certificates/95282.pem
	I0729 18:24:35.354476  123843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem -> /usr/share/ca-certificates/952822.pem
	I0729 18:24:35.354491  123843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:24:35.355285  123843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 18:24:35.379278  123843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 18:24:35.402310  123843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 18:24:35.425849  123843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 18:24:35.448835  123843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/multinode-976328/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 18:24:35.472586  123843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/multinode-976328/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 18:24:35.495906  123843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/multinode-976328/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 18:24:35.518833  123843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/multinode-976328/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 18:24:35.541346  123843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282.pem --> /usr/share/ca-certificates/95282.pem (1338 bytes)
	I0729 18:24:35.576993  123843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem --> /usr/share/ca-certificates/952822.pem (1708 bytes)
	I0729 18:24:35.671312  123843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 18:24:35.736086  123843 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
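The scp block above stages the shared CA material, the multinode-976328 profile certificates, and the kubeconfig onto the node. A minimal sketch, not part of the test run itself, for spot-checking that copy over the same SSH session (the paths are exactly the scp targets above; nothing else is assumed):

    # apiserver.crt should chain to the copied CA
    openssl verify -CAfile /var/lib/minikube/certs/ca.crt /var/lib/minikube/certs/apiserver.crt
    # the kubeconfig should match the 738 bytes reported in the log
    stat -c '%s %n' /var/lib/minikube/kubeconfig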
	I0729 18:24:35.777952  123843 ssh_runner.go:195] Run: openssl version
	I0729 18:24:35.806550  123843 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0729 18:24:35.811701  123843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/952822.pem && ln -fs /usr/share/ca-certificates/952822.pem /etc/ssl/certs/952822.pem"
	I0729 18:24:35.827448  123843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/952822.pem
	I0729 18:24:35.835446  123843 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 29 17:45 /usr/share/ca-certificates/952822.pem
	I0729 18:24:35.835477  123843 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:45 /usr/share/ca-certificates/952822.pem
	I0729 18:24:35.835520  123843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/952822.pem
	I0729 18:24:35.863796  123843 command_runner.go:130] > 3ec20f2e
	I0729 18:24:35.863905  123843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/952822.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 18:24:35.879826  123843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 18:24:35.896285  123843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:24:35.902246  123843 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 29 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:24:35.904643  123843 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:24:35.904703  123843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:24:35.921269  123843 command_runner.go:130] > b5213941
	I0729 18:24:35.921379  123843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 18:24:35.937651  123843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/95282.pem && ln -fs /usr/share/ca-certificates/95282.pem /etc/ssl/certs/95282.pem"
	I0729 18:24:35.954984  123843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/95282.pem
	I0729 18:24:35.964220  123843 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 29 17:45 /usr/share/ca-certificates/95282.pem
	I0729 18:24:35.964706  123843 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:45 /usr/share/ca-certificates/95282.pem
	I0729 18:24:35.964761  123843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/95282.pem
	I0729 18:24:35.977113  123843 command_runner.go:130] > 51391683
	I0729 18:24:35.977212  123843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/95282.pem /etc/ssl/certs/51391683.0"
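The three ls/hash/ln sequences above all follow one pattern: link the PEM into /usr/share/ca-certificates, hash its subject with openssl, then symlink it into /etc/ssl/certs under <hash>.0 so OpenSSL's lookup-by-hash can find it. A condensed sketch of that pattern, using the 952822.pem example whose hash 3ec20f2e appears above:

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/952822.pem)   # prints 3ec20f2e in this run
    sudo ln -fs /etc/ssl/certs/952822.pem "/etc/ssl/certs/${hash}.0"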
	I0729 18:24:36.003133  123843 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 18:24:36.024906  123843 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 18:24:36.024936  123843 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0729 18:24:36.024944  123843 command_runner.go:130] > Device: 253,1	Inode: 6292011     Links: 1
	I0729 18:24:36.024952  123843 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0729 18:24:36.024961  123843 command_runner.go:130] > Access: 2024-07-29 18:17:46.106282779 +0000
	I0729 18:24:36.024968  123843 command_runner.go:130] > Modify: 2024-07-29 18:17:46.106282779 +0000
	I0729 18:24:36.024975  123843 command_runner.go:130] > Change: 2024-07-29 18:17:46.106282779 +0000
	I0729 18:24:36.024982  123843 command_runner.go:130] >  Birth: 2024-07-29 18:17:46.106282779 +0000
	I0729 18:24:36.025044  123843 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 18:24:36.034421  123843 command_runner.go:130] > Certificate will not expire
	I0729 18:24:36.034610  123843 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 18:24:36.046110  123843 command_runner.go:130] > Certificate will not expire
	I0729 18:24:36.046331  123843 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 18:24:36.061864  123843 command_runner.go:130] > Certificate will not expire
	I0729 18:24:36.062069  123843 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 18:24:36.078654  123843 command_runner.go:130] > Certificate will not expire
	I0729 18:24:36.078926  123843 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 18:24:36.084908  123843 command_runner.go:130] > Certificate will not expire
	I0729 18:24:36.085129  123843 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 18:24:36.090925  123843 command_runner.go:130] > Certificate will not expire
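Each -checkend 86400 call above asks OpenSSL whether the certificate expires within the next 24 hours; exit status 0 (and the "Certificate will not expire" line echoed in the log) means it stays valid for at least that long. A sketch of the same check made explicit, reusing one of the paths from the log:

    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
        echo "still valid for at least 24h"
    else
        echo "expires within 24h (or could not be read)"
    fi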
	I0729 18:24:36.091183  123843 kubeadm.go:392] StartCluster: {Name:multinode-976328 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:multinode-976328 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.157 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.144 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
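The StartCluster configuration above declares three nodes for this profile: the control plane at 192.168.39.211 plus workers m02 and m03, all on Kubernetes v1.30.3 with the crio runtime. Assuming the profile's context is loaded in the kubeconfig (minikube names contexts after the profile, as with the other contexts used in this report), the live view can be cross-checked with:

    kubectl --context multinode-976328 get nodes -o wide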
	I0729 18:24:36.091337  123843 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 18:24:36.091398  123843 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:24:36.155901  123843 command_runner.go:130] > 71846f8a18b8218eaba74710cd9a2b74113bf1ddc0b85da0142bd2ff10d376e9
	I0729 18:24:36.155933  123843 command_runner.go:130] > 380c57a942e9b8e8da4aeec3c5e4acfb75abd6545a05b4844bcd31342dcd2d56
	I0729 18:24:36.155943  123843 command_runner.go:130] > 8b1718df722bf8960f9a0f853064dc20e706ade11255ddaffd2d6787c83ca9ed
	I0729 18:24:36.155955  123843 command_runner.go:130] > ede7653ba82d8013dc82cc34456497641549733349058713838d911322fe2052
	I0729 18:24:36.155964  123843 command_runner.go:130] > fc72e5cd2f6959f4a5c3767fd52eb35adddd720c79581453e188841b8961736d
	I0729 18:24:36.155971  123843 command_runner.go:130] > 1b584ffa95698b9cb0ec2f252ccbedd6d8c6c50da3e7cbf4707b2f007d97dc7f
	I0729 18:24:36.155980  123843 command_runner.go:130] > fd327222d7f72a72b500b2390cae8ebae3740bf6da2f97a1da3042c6b15897ac
	I0729 18:24:36.155998  123843 command_runner.go:130] > 220c67ac7bb003b3f5eb10ef9500671e3f6242855a58efc5750688b8faa63850
	I0729 18:24:36.156011  123843 command_runner.go:130] > 551d37c89df791c8d7c7ced8d5c57332a6b4a2783a737d5dbdd75763e5784414
	I0729 18:24:36.156019  123843 command_runner.go:130] > 3b8f2b9512e354691ad883e7a1e671367650c4594f37b48f7919a80e82fe46c9
	I0729 18:24:36.156028  123843 command_runner.go:130] > 2927818faccc0686b610f0146bcd8c41985710fdcaa02ee5353cc058348cdf6a
	I0729 18:24:36.156061  123843 cri.go:89] found id: "71846f8a18b8218eaba74710cd9a2b74113bf1ddc0b85da0142bd2ff10d376e9"
	I0729 18:24:36.156074  123843 cri.go:89] found id: "380c57a942e9b8e8da4aeec3c5e4acfb75abd6545a05b4844bcd31342dcd2d56"
	I0729 18:24:36.156079  123843 cri.go:89] found id: "8b1718df722bf8960f9a0f853064dc20e706ade11255ddaffd2d6787c83ca9ed"
	I0729 18:24:36.156084  123843 cri.go:89] found id: "ede7653ba82d8013dc82cc34456497641549733349058713838d911322fe2052"
	I0729 18:24:36.156093  123843 cri.go:89] found id: "fc72e5cd2f6959f4a5c3767fd52eb35adddd720c79581453e188841b8961736d"
	I0729 18:24:36.156097  123843 cri.go:89] found id: "1b584ffa95698b9cb0ec2f252ccbedd6d8c6c50da3e7cbf4707b2f007d97dc7f"
	I0729 18:24:36.156102  123843 cri.go:89] found id: "fd327222d7f72a72b500b2390cae8ebae3740bf6da2f97a1da3042c6b15897ac"
	I0729 18:24:36.156109  123843 cri.go:89] found id: "220c67ac7bb003b3f5eb10ef9500671e3f6242855a58efc5750688b8faa63850"
	I0729 18:24:36.156114  123843 cri.go:89] found id: "551d37c89df791c8d7c7ced8d5c57332a6b4a2783a737d5dbdd75763e5784414"
	I0729 18:24:36.156126  123843 cri.go:89] found id: "3b8f2b9512e354691ad883e7a1e671367650c4594f37b48f7919a80e82fe46c9"
	I0729 18:24:36.156134  123843 cri.go:89] found id: "2927818faccc0686b610f0146bcd8c41985710fdcaa02ee5353cc058348cdf6a"
	I0729 18:24:36.156139  123843 cri.go:89] found id: ""
	I0729 18:24:36.156201  123843 ssh_runner.go:195] Run: sudo runc list -f json
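The IDs echoed as "found id:" entries above come from filtering CRI containers to the kube-system namespace; both commands appear verbatim in the Run lines, so the equivalent check by hand on the node is:

    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    sudo runc list -f json   # follow-up call whose output is not captured in this excerpt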
	
	
	==> CRI-O <==
	Jul 29 18:28:53 multinode-976328 crio[2915]: time="2024-07-29 18:28:53.016380405Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722277733014926946,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fef417df-19d2-48d0-88bb-8e5b8830fafd name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:28:53 multinode-976328 crio[2915]: time="2024-07-29 18:28:53.019316350Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=757a7ec5-d444-4b73-a648-b819211693b1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:28:53 multinode-976328 crio[2915]: time="2024-07-29 18:28:53.019389239Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=757a7ec5-d444-4b73-a648-b819211693b1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:28:53 multinode-976328 crio[2915]: time="2024-07-29 18:28:53.019735799Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bd4bedb03eccdac261a791239bb1da575e1e9ef2a04f1e29ab0d460d98a719a3,PodSandboxId:27edcb9cac743e5e25ce7c44c3a05aab42481e0c908e3be763040d245eefeca5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722277509544894990,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mdnj5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 01e64b3b-f8da-4ecf-8914-f0bcea794606,},Annotations:map[string]string{io.kubernetes.container.hash: 603901c4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:942abb259c7e41ee6bcc94c52829c2230867d3047c11119053032f3fc5a82fbf,PodSandboxId:15d282b4a9515f7d0067c75017757bf5202b1e5ad45c366f20f3c7d426b86913,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722277493577273802,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f0e11b0-fc92-4d04-961e-d0888214b2b6,},Annotations:map[string]string{io.kubernetes.container.hash: 84b891e1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:192bdf369e557b0185eaaf55a2b84baee3b192d72ca04820b103f49a17302a92,PodSandboxId:26bae25767fd90c7e18df6ab9f66cbec92b15a087b59eaedd138f3ca74aa7a25,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722277489766213141,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-976328,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3a0c3fce943009af839816b891ac22d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9873fe03dfd66dac4612fc75aad173260796756b67a32a4e0c38b2ca71fba9c,PodSandboxId:94083e46604e3ff6f03c10a1a5ffe4231e14a65565e55dbd405b0b5925aec1d6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722277489769215189,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-976328,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e79d457a0c1c2c2d64935c1d26063957,},Annotations:map[string]string{io.kubernetes.container.hash: 691ce7cf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes
.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99890209de3349ae1acf10d288f7150011e6631a63614be3a0c65a1939bd9b6e,PodSandboxId:c08bff072868f0ad70742e4721c4de14cfaf52c22b05b9621618678c1e008025,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722277489759399643,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-976328,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4ee0dfa83d8a84f968bb69f76db985b,},Annotations:map[string]string{io.kubernetes.container.hash: a588f316,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io
.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2157c2885301b6ac6a8e148e42e6f0b9ef4d92b772885763166f26ffd267cf4c,PodSandboxId:85b2e4245c414d7945daa446a343cbc420696c62d526d2a94e4ba24f48f6efae,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722277486452720732,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sls9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c72421fc-93fc-42d7-8a68-93fe1f74686f,},Annotations:map[string]string{io.kubernetes.container.hash: 65eba02e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\"
:9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:166885d3e009fa3e0be8b5922d734ccb35202f737b5efbd4ececb4b8ac56acec,PodSandboxId:15d282b4a9515f7d0067c75017757bf5202b1e5ad45c366f20f3c7d426b86913,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722277486396048984,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f0e11b0-fc92-4d04-961e-d0888214b2b6,},Annotations:map[string]string{io.kubernetes.container
.hash: 84b891e1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9604b38a357c90bcf05ce45f13f6a4fb7d4dc430b9394b7eb8154f86776bd074,PodSandboxId:e454bd2f9f0a2501d9c2d45b8e358a024438d4f5d9b9567ddf9e408deeabaaa6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722277484565797033,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ttmqz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f226ace9-e1df-4171-bd7a-80c663032a34,},Annotations:map[string]string{io.kubernetes.container.hash: 81b952b6,io.kubernetes.cont
ainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d01c9ad4df1fab0ab613b4207329e5ba2139310e3edf3d01a207fd27a46a48df,PodSandboxId:fe6973ba344afaa84337f9cbdc74a64a090277c02b5c50ca14ac71a04f91f1a0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722277483332093931,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5hqrk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d116a5b3-2d88-4c19-862a-ce4e6100b5c9,},Annotations:map[string]string{io.kubernetes.container.hash: f1e8a241,io.kubernetes.container.restartCount: 1,io.kubernet
es.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54847580765e8923b115405b28cd71d12c1fe0b6a89f33710e310d7720dc35bb,PodSandboxId:16e2eb1765c611ea056238b46c0f275c8501d96377c8c865393df5743bcbc044,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722277481356730172,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-976328,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9505709dfd9b02aeb696ed23f164e402,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71846f8a18b8218eaba74710cd9a2b74113bf1ddc0b85da0142bd2ff10d376e9,PodSandboxId:c08bff072868f0ad70742e4721c4de14cfaf52c22b05b9621618678c1e008025,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722277475804732209,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-976328,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4ee0dfa83d8a84f968bb69f76db985b,},Annotations:map[string]string{io.kubernetes.container.hash: a588f316,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:380c57a942e9b8e8da4aeec3c5e4acfb75abd6545a05b4844bcd31342dcd2d56,PodSandboxId:94083e46604e3ff6f03c10a1a5ffe4231e14a65565e55dbd405b0b5925aec1d6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722277475761559165,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-976328,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e79d457a0c1c2c2d64935c1d26063957,},Annotations:map[string]string{io.kubernetes.container.hash: 691ce7cf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b1718df722bf8960f9a0f853064dc20e706ade11255ddaffd2d6787c83ca9ed,PodSandboxId:26bae25767fd90c7e18df6ab9f66cbec92b15a087b59eaedd138f3ca74aa7a25,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722277475742515222,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-976328,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3a0c3fce943009af839816b891ac22d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad7ab677d3311a89174206ae528f753ea5439656ab7db7cad86b4685066b7465,PodSandboxId:cc5d72b3c3274f25f18b24ce04d4db8a40467c9b039ad699870a2444b538dce4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722277156836871571,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mdnj5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 01e64b3b-f8da-4ecf-8914-f0bcea794606,},Annotations:map[string]string{io.kubernetes.container.hash: 603901c4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ede7653ba82d8013dc82cc34456497641549733349058713838d911322fe2052,PodSandboxId:e3756b7a777ec337e45a3be46d6644245b5cbdcab43bb99a73fbab59237098f0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722277102936035309,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sls9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c72421fc-93fc-42d7-8a68-93fe1f74686f,},Annotations:map[string]string{io.kubernetes.container.hash: 65eba02e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b584ffa95698b9cb0ec2f252ccbedd6d8c6c50da3e7cbf4707b2f007d97dc7f,PodSandboxId:838c7abd5f0e6ea85cb6374de70e5372923e2e8b7c49a0e36552fed0a5dd68a8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722277090774863426,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ttmqz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f226ace9-e1df-4171-bd7a-80c663
032a34,},Annotations:map[string]string{io.kubernetes.container.hash: 81b952b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd327222d7f72a72b500b2390cae8ebae3740bf6da2f97a1da3042c6b15897ac,PodSandboxId:d1d827567ad4e3c5fa168c044f54ff6a6363a7abfc8dfeecfa4c1f95dcc69fb1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722277088426604466,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5hqrk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d116a5b3-2d88-4c19-862a-ce4e6100b5c9,},Annotations:map[string]st
ring{io.kubernetes.container.hash: f1e8a241,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b8f2b9512e354691ad883e7a1e671367650c4594f37b48f7919a80e82fe46c9,PodSandboxId:196c65306bd33d78cee65d65848a13eca37b837a106f9be20bfeae8170a0b9bc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722277069074575446,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-976328,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9505709dfd9b02aeb696ed23f164e402,},Annotations:m
ap[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=757a7ec5-d444-4b73-a648-b819211693b1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:28:53 multinode-976328 crio[2915]: time="2024-07-29 18:28:53.059370070Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f5a4b56c-af72-4718-9277-fab0ba028b4d name=/runtime.v1.RuntimeService/Version
	Jul 29 18:28:53 multinode-976328 crio[2915]: time="2024-07-29 18:28:53.059439193Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f5a4b56c-af72-4718-9277-fab0ba028b4d name=/runtime.v1.RuntimeService/Version
	Jul 29 18:28:53 multinode-976328 crio[2915]: time="2024-07-29 18:28:53.060543423Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cd05db08-cd1c-41b4-97bd-3cfb71e40eb6 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:28:53 multinode-976328 crio[2915]: time="2024-07-29 18:28:53.060999872Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722277733060978931,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cd05db08-cd1c-41b4-97bd-3cfb71e40eb6 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:28:53 multinode-976328 crio[2915]: time="2024-07-29 18:28:53.061800829Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e7418975-0ff4-460b-9b1b-6549ae9d8453 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:28:53 multinode-976328 crio[2915]: time="2024-07-29 18:28:53.061875942Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e7418975-0ff4-460b-9b1b-6549ae9d8453 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:28:53 multinode-976328 crio[2915]: time="2024-07-29 18:28:53.062253695Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bd4bedb03eccdac261a791239bb1da575e1e9ef2a04f1e29ab0d460d98a719a3,PodSandboxId:27edcb9cac743e5e25ce7c44c3a05aab42481e0c908e3be763040d245eefeca5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722277509544894990,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mdnj5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 01e64b3b-f8da-4ecf-8914-f0bcea794606,},Annotations:map[string]string{io.kubernetes.container.hash: 603901c4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:942abb259c7e41ee6bcc94c52829c2230867d3047c11119053032f3fc5a82fbf,PodSandboxId:15d282b4a9515f7d0067c75017757bf5202b1e5ad45c366f20f3c7d426b86913,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722277493577273802,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f0e11b0-fc92-4d04-961e-d0888214b2b6,},Annotations:map[string]string{io.kubernetes.container.hash: 84b891e1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:192bdf369e557b0185eaaf55a2b84baee3b192d72ca04820b103f49a17302a92,PodSandboxId:26bae25767fd90c7e18df6ab9f66cbec92b15a087b59eaedd138f3ca74aa7a25,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722277489766213141,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-976328,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3a0c3fce943009af839816b891ac22d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9873fe03dfd66dac4612fc75aad173260796756b67a32a4e0c38b2ca71fba9c,PodSandboxId:94083e46604e3ff6f03c10a1a5ffe4231e14a65565e55dbd405b0b5925aec1d6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722277489769215189,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-976328,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e79d457a0c1c2c2d64935c1d26063957,},Annotations:map[string]string{io.kubernetes.container.hash: 691ce7cf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes
.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99890209de3349ae1acf10d288f7150011e6631a63614be3a0c65a1939bd9b6e,PodSandboxId:c08bff072868f0ad70742e4721c4de14cfaf52c22b05b9621618678c1e008025,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722277489759399643,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-976328,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4ee0dfa83d8a84f968bb69f76db985b,},Annotations:map[string]string{io.kubernetes.container.hash: a588f316,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io
.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2157c2885301b6ac6a8e148e42e6f0b9ef4d92b772885763166f26ffd267cf4c,PodSandboxId:85b2e4245c414d7945daa446a343cbc420696c62d526d2a94e4ba24f48f6efae,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722277486452720732,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sls9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c72421fc-93fc-42d7-8a68-93fe1f74686f,},Annotations:map[string]string{io.kubernetes.container.hash: 65eba02e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\"
:9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:166885d3e009fa3e0be8b5922d734ccb35202f737b5efbd4ececb4b8ac56acec,PodSandboxId:15d282b4a9515f7d0067c75017757bf5202b1e5ad45c366f20f3c7d426b86913,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722277486396048984,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f0e11b0-fc92-4d04-961e-d0888214b2b6,},Annotations:map[string]string{io.kubernetes.container
.hash: 84b891e1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9604b38a357c90bcf05ce45f13f6a4fb7d4dc430b9394b7eb8154f86776bd074,PodSandboxId:e454bd2f9f0a2501d9c2d45b8e358a024438d4f5d9b9567ddf9e408deeabaaa6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722277484565797033,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ttmqz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f226ace9-e1df-4171-bd7a-80c663032a34,},Annotations:map[string]string{io.kubernetes.container.hash: 81b952b6,io.kubernetes.cont
ainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d01c9ad4df1fab0ab613b4207329e5ba2139310e3edf3d01a207fd27a46a48df,PodSandboxId:fe6973ba344afaa84337f9cbdc74a64a090277c02b5c50ca14ac71a04f91f1a0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722277483332093931,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5hqrk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d116a5b3-2d88-4c19-862a-ce4e6100b5c9,},Annotations:map[string]string{io.kubernetes.container.hash: f1e8a241,io.kubernetes.container.restartCount: 1,io.kubernet
es.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54847580765e8923b115405b28cd71d12c1fe0b6a89f33710e310d7720dc35bb,PodSandboxId:16e2eb1765c611ea056238b46c0f275c8501d96377c8c865393df5743bcbc044,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722277481356730172,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-976328,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9505709dfd9b02aeb696ed23f164e402,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71846f8a18b8218eaba74710cd9a2b74113bf1ddc0b85da0142bd2ff10d376e9,PodSandboxId:c08bff072868f0ad70742e4721c4de14cfaf52c22b05b9621618678c1e008025,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722277475804732209,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-976328,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4ee0dfa83d8a84f968bb69f76db985b,},Annotations:map[string]string{io.kubernetes.container.hash: a588f316,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:380c57a942e9b8e8da4aeec3c5e4acfb75abd6545a05b4844bcd31342dcd2d56,PodSandboxId:94083e46604e3ff6f03c10a1a5ffe4231e14a65565e55dbd405b0b5925aec1d6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722277475761559165,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-976328,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e79d457a0c1c2c2d64935c1d26063957,},Annotations:map[string]string{io.kubernetes.container.hash: 691ce7cf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b1718df722bf8960f9a0f853064dc20e706ade11255ddaffd2d6787c83ca9ed,PodSandboxId:26bae25767fd90c7e18df6ab9f66cbec92b15a087b59eaedd138f3ca74aa7a25,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722277475742515222,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-976328,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3a0c3fce943009af839816b891ac22d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad7ab677d3311a89174206ae528f753ea5439656ab7db7cad86b4685066b7465,PodSandboxId:cc5d72b3c3274f25f18b24ce04d4db8a40467c9b039ad699870a2444b538dce4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722277156836871571,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mdnj5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 01e64b3b-f8da-4ecf-8914-f0bcea794606,},Annotations:map[string]string{io.kubernetes.container.hash: 603901c4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ede7653ba82d8013dc82cc34456497641549733349058713838d911322fe2052,PodSandboxId:e3756b7a777ec337e45a3be46d6644245b5cbdcab43bb99a73fbab59237098f0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722277102936035309,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sls9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c72421fc-93fc-42d7-8a68-93fe1f74686f,},Annotations:map[string]string{io.kubernetes.container.hash: 65eba02e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b584ffa95698b9cb0ec2f252ccbedd6d8c6c50da3e7cbf4707b2f007d97dc7f,PodSandboxId:838c7abd5f0e6ea85cb6374de70e5372923e2e8b7c49a0e36552fed0a5dd68a8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722277090774863426,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ttmqz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f226ace9-e1df-4171-bd7a-80c663
032a34,},Annotations:map[string]string{io.kubernetes.container.hash: 81b952b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd327222d7f72a72b500b2390cae8ebae3740bf6da2f97a1da3042c6b15897ac,PodSandboxId:d1d827567ad4e3c5fa168c044f54ff6a6363a7abfc8dfeecfa4c1f95dcc69fb1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722277088426604466,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5hqrk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d116a5b3-2d88-4c19-862a-ce4e6100b5c9,},Annotations:map[string]st
ring{io.kubernetes.container.hash: f1e8a241,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b8f2b9512e354691ad883e7a1e671367650c4594f37b48f7919a80e82fe46c9,PodSandboxId:196c65306bd33d78cee65d65848a13eca37b837a106f9be20bfeae8170a0b9bc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722277069074575446,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-976328,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9505709dfd9b02aeb696ed23f164e402,},Annotations:m
ap[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e7418975-0ff4-460b-9b1b-6549ae9d8453 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:28:53 multinode-976328 crio[2915]: time="2024-07-29 18:28:53.107391506Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b3d567d3-f773-4aac-ac47-cc22a7e4a6a9 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:28:53 multinode-976328 crio[2915]: time="2024-07-29 18:28:53.107460089Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b3d567d3-f773-4aac-ac47-cc22a7e4a6a9 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:28:53 multinode-976328 crio[2915]: time="2024-07-29 18:28:53.108895186Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=236d8647-90e0-4d83-be93-1f9fcf16d309 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:28:53 multinode-976328 crio[2915]: time="2024-07-29 18:28:53.109392484Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722277733109367394,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=236d8647-90e0-4d83-be93-1f9fcf16d309 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:28:53 multinode-976328 crio[2915]: time="2024-07-29 18:28:53.110030766Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5dc62d04-ea80-4745-92be-85ad0f40c35a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:28:53 multinode-976328 crio[2915]: time="2024-07-29 18:28:53.110108290Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5dc62d04-ea80-4745-92be-85ad0f40c35a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:28:53 multinode-976328 crio[2915]: time="2024-07-29 18:28:53.110493842Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bd4bedb03eccdac261a791239bb1da575e1e9ef2a04f1e29ab0d460d98a719a3,PodSandboxId:27edcb9cac743e5e25ce7c44c3a05aab42481e0c908e3be763040d245eefeca5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722277509544894990,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mdnj5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 01e64b3b-f8da-4ecf-8914-f0bcea794606,},Annotations:map[string]string{io.kubernetes.container.hash: 603901c4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:942abb259c7e41ee6bcc94c52829c2230867d3047c11119053032f3fc5a82fbf,PodSandboxId:15d282b4a9515f7d0067c75017757bf5202b1e5ad45c366f20f3c7d426b86913,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722277493577273802,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f0e11b0-fc92-4d04-961e-d0888214b2b6,},Annotations:map[string]string{io.kubernetes.container.hash: 84b891e1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:192bdf369e557b0185eaaf55a2b84baee3b192d72ca04820b103f49a17302a92,PodSandboxId:26bae25767fd90c7e18df6ab9f66cbec92b15a087b59eaedd138f3ca74aa7a25,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722277489766213141,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-976328,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3a0c3fce943009af839816b891ac22d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9873fe03dfd66dac4612fc75aad173260796756b67a32a4e0c38b2ca71fba9c,PodSandboxId:94083e46604e3ff6f03c10a1a5ffe4231e14a65565e55dbd405b0b5925aec1d6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722277489769215189,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-976328,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e79d457a0c1c2c2d64935c1d26063957,},Annotations:map[string]string{io.kubernetes.container.hash: 691ce7cf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes
.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99890209de3349ae1acf10d288f7150011e6631a63614be3a0c65a1939bd9b6e,PodSandboxId:c08bff072868f0ad70742e4721c4de14cfaf52c22b05b9621618678c1e008025,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722277489759399643,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-976328,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4ee0dfa83d8a84f968bb69f76db985b,},Annotations:map[string]string{io.kubernetes.container.hash: a588f316,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io
.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2157c2885301b6ac6a8e148e42e6f0b9ef4d92b772885763166f26ffd267cf4c,PodSandboxId:85b2e4245c414d7945daa446a343cbc420696c62d526d2a94e4ba24f48f6efae,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722277486452720732,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sls9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c72421fc-93fc-42d7-8a68-93fe1f74686f,},Annotations:map[string]string{io.kubernetes.container.hash: 65eba02e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\"
:9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:166885d3e009fa3e0be8b5922d734ccb35202f737b5efbd4ececb4b8ac56acec,PodSandboxId:15d282b4a9515f7d0067c75017757bf5202b1e5ad45c366f20f3c7d426b86913,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722277486396048984,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f0e11b0-fc92-4d04-961e-d0888214b2b6,},Annotations:map[string]string{io.kubernetes.container
.hash: 84b891e1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9604b38a357c90bcf05ce45f13f6a4fb7d4dc430b9394b7eb8154f86776bd074,PodSandboxId:e454bd2f9f0a2501d9c2d45b8e358a024438d4f5d9b9567ddf9e408deeabaaa6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722277484565797033,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ttmqz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f226ace9-e1df-4171-bd7a-80c663032a34,},Annotations:map[string]string{io.kubernetes.container.hash: 81b952b6,io.kubernetes.cont
ainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d01c9ad4df1fab0ab613b4207329e5ba2139310e3edf3d01a207fd27a46a48df,PodSandboxId:fe6973ba344afaa84337f9cbdc74a64a090277c02b5c50ca14ac71a04f91f1a0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722277483332093931,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5hqrk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d116a5b3-2d88-4c19-862a-ce4e6100b5c9,},Annotations:map[string]string{io.kubernetes.container.hash: f1e8a241,io.kubernetes.container.restartCount: 1,io.kubernet
es.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54847580765e8923b115405b28cd71d12c1fe0b6a89f33710e310d7720dc35bb,PodSandboxId:16e2eb1765c611ea056238b46c0f275c8501d96377c8c865393df5743bcbc044,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722277481356730172,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-976328,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9505709dfd9b02aeb696ed23f164e402,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71846f8a18b8218eaba74710cd9a2b74113bf1ddc0b85da0142bd2ff10d376e9,PodSandboxId:c08bff072868f0ad70742e4721c4de14cfaf52c22b05b9621618678c1e008025,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722277475804732209,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-976328,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4ee0dfa83d8a84f968bb69f76db985b,},Annotations:map[string]string{io.kubernetes.container.hash: a588f316,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:380c57a942e9b8e8da4aeec3c5e4acfb75abd6545a05b4844bcd31342dcd2d56,PodSandboxId:94083e46604e3ff6f03c10a1a5ffe4231e14a65565e55dbd405b0b5925aec1d6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722277475761559165,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-976328,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e79d457a0c1c2c2d64935c1d26063957,},Annotations:map[string]string{io.kubernetes.container.hash: 691ce7cf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b1718df722bf8960f9a0f853064dc20e706ade11255ddaffd2d6787c83ca9ed,PodSandboxId:26bae25767fd90c7e18df6ab9f66cbec92b15a087b59eaedd138f3ca74aa7a25,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722277475742515222,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-976328,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3a0c3fce943009af839816b891ac22d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad7ab677d3311a89174206ae528f753ea5439656ab7db7cad86b4685066b7465,PodSandboxId:cc5d72b3c3274f25f18b24ce04d4db8a40467c9b039ad699870a2444b538dce4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722277156836871571,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mdnj5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 01e64b3b-f8da-4ecf-8914-f0bcea794606,},Annotations:map[string]string{io.kubernetes.container.hash: 603901c4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ede7653ba82d8013dc82cc34456497641549733349058713838d911322fe2052,PodSandboxId:e3756b7a777ec337e45a3be46d6644245b5cbdcab43bb99a73fbab59237098f0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722277102936035309,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sls9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c72421fc-93fc-42d7-8a68-93fe1f74686f,},Annotations:map[string]string{io.kubernetes.container.hash: 65eba02e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b584ffa95698b9cb0ec2f252ccbedd6d8c6c50da3e7cbf4707b2f007d97dc7f,PodSandboxId:838c7abd5f0e6ea85cb6374de70e5372923e2e8b7c49a0e36552fed0a5dd68a8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722277090774863426,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ttmqz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f226ace9-e1df-4171-bd7a-80c663
032a34,},Annotations:map[string]string{io.kubernetes.container.hash: 81b952b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd327222d7f72a72b500b2390cae8ebae3740bf6da2f97a1da3042c6b15897ac,PodSandboxId:d1d827567ad4e3c5fa168c044f54ff6a6363a7abfc8dfeecfa4c1f95dcc69fb1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722277088426604466,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5hqrk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d116a5b3-2d88-4c19-862a-ce4e6100b5c9,},Annotations:map[string]st
ring{io.kubernetes.container.hash: f1e8a241,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b8f2b9512e354691ad883e7a1e671367650c4594f37b48f7919a80e82fe46c9,PodSandboxId:196c65306bd33d78cee65d65848a13eca37b837a106f9be20bfeae8170a0b9bc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722277069074575446,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-976328,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9505709dfd9b02aeb696ed23f164e402,},Annotations:m
ap[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5dc62d04-ea80-4745-92be-85ad0f40c35a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:28:53 multinode-976328 crio[2915]: time="2024-07-29 18:28:53.153453328Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=67fe1f56-e3bd-4244-9d0d-b1adaec4b551 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:28:53 multinode-976328 crio[2915]: time="2024-07-29 18:28:53.153536265Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=67fe1f56-e3bd-4244-9d0d-b1adaec4b551 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:28:53 multinode-976328 crio[2915]: time="2024-07-29 18:28:53.154658080Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0be638f5-9705-44ab-8386-7094d7f939de name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:28:53 multinode-976328 crio[2915]: time="2024-07-29 18:28:53.155057895Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722277733155038785,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0be638f5-9705-44ab-8386-7094d7f939de name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:28:53 multinode-976328 crio[2915]: time="2024-07-29 18:28:53.155505619Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=349ba097-2e16-42e5-8413-cce93926d1eb name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:28:53 multinode-976328 crio[2915]: time="2024-07-29 18:28:53.155577374Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=349ba097-2e16-42e5-8413-cce93926d1eb name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:28:53 multinode-976328 crio[2915]: time="2024-07-29 18:28:53.155925413Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bd4bedb03eccdac261a791239bb1da575e1e9ef2a04f1e29ab0d460d98a719a3,PodSandboxId:27edcb9cac743e5e25ce7c44c3a05aab42481e0c908e3be763040d245eefeca5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722277509544894990,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mdnj5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 01e64b3b-f8da-4ecf-8914-f0bcea794606,},Annotations:map[string]string{io.kubernetes.container.hash: 603901c4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:942abb259c7e41ee6bcc94c52829c2230867d3047c11119053032f3fc5a82fbf,PodSandboxId:15d282b4a9515f7d0067c75017757bf5202b1e5ad45c366f20f3c7d426b86913,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722277493577273802,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f0e11b0-fc92-4d04-961e-d0888214b2b6,},Annotations:map[string]string{io.kubernetes.container.hash: 84b891e1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:192bdf369e557b0185eaaf55a2b84baee3b192d72ca04820b103f49a17302a92,PodSandboxId:26bae25767fd90c7e18df6ab9f66cbec92b15a087b59eaedd138f3ca74aa7a25,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722277489766213141,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-976328,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3a0c3fce943009af839816b891ac22d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9873fe03dfd66dac4612fc75aad173260796756b67a32a4e0c38b2ca71fba9c,PodSandboxId:94083e46604e3ff6f03c10a1a5ffe4231e14a65565e55dbd405b0b5925aec1d6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722277489769215189,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-976328,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e79d457a0c1c2c2d64935c1d26063957,},Annotations:map[string]string{io.kubernetes.container.hash: 691ce7cf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes
.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99890209de3349ae1acf10d288f7150011e6631a63614be3a0c65a1939bd9b6e,PodSandboxId:c08bff072868f0ad70742e4721c4de14cfaf52c22b05b9621618678c1e008025,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722277489759399643,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-976328,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4ee0dfa83d8a84f968bb69f76db985b,},Annotations:map[string]string{io.kubernetes.container.hash: a588f316,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io
.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2157c2885301b6ac6a8e148e42e6f0b9ef4d92b772885763166f26ffd267cf4c,PodSandboxId:85b2e4245c414d7945daa446a343cbc420696c62d526d2a94e4ba24f48f6efae,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722277486452720732,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sls9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c72421fc-93fc-42d7-8a68-93fe1f74686f,},Annotations:map[string]string{io.kubernetes.container.hash: 65eba02e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\"
:9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:166885d3e009fa3e0be8b5922d734ccb35202f737b5efbd4ececb4b8ac56acec,PodSandboxId:15d282b4a9515f7d0067c75017757bf5202b1e5ad45c366f20f3c7d426b86913,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722277486396048984,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f0e11b0-fc92-4d04-961e-d0888214b2b6,},Annotations:map[string]string{io.kubernetes.container
.hash: 84b891e1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9604b38a357c90bcf05ce45f13f6a4fb7d4dc430b9394b7eb8154f86776bd074,PodSandboxId:e454bd2f9f0a2501d9c2d45b8e358a024438d4f5d9b9567ddf9e408deeabaaa6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722277484565797033,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ttmqz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f226ace9-e1df-4171-bd7a-80c663032a34,},Annotations:map[string]string{io.kubernetes.container.hash: 81b952b6,io.kubernetes.cont
ainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d01c9ad4df1fab0ab613b4207329e5ba2139310e3edf3d01a207fd27a46a48df,PodSandboxId:fe6973ba344afaa84337f9cbdc74a64a090277c02b5c50ca14ac71a04f91f1a0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722277483332093931,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5hqrk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d116a5b3-2d88-4c19-862a-ce4e6100b5c9,},Annotations:map[string]string{io.kubernetes.container.hash: f1e8a241,io.kubernetes.container.restartCount: 1,io.kubernet
es.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54847580765e8923b115405b28cd71d12c1fe0b6a89f33710e310d7720dc35bb,PodSandboxId:16e2eb1765c611ea056238b46c0f275c8501d96377c8c865393df5743bcbc044,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722277481356730172,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-976328,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9505709dfd9b02aeb696ed23f164e402,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71846f8a18b8218eaba74710cd9a2b74113bf1ddc0b85da0142bd2ff10d376e9,PodSandboxId:c08bff072868f0ad70742e4721c4de14cfaf52c22b05b9621618678c1e008025,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722277475804732209,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-976328,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4ee0dfa83d8a84f968bb69f76db985b,},Annotations:map[string]string{io.kubernetes.container.hash: a588f316,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:380c57a942e9b8e8da4aeec3c5e4acfb75abd6545a05b4844bcd31342dcd2d56,PodSandboxId:94083e46604e3ff6f03c10a1a5ffe4231e14a65565e55dbd405b0b5925aec1d6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722277475761559165,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-976328,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e79d457a0c1c2c2d64935c1d26063957,},Annotations:map[string]string{io.kubernetes.container.hash: 691ce7cf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b1718df722bf8960f9a0f853064dc20e706ade11255ddaffd2d6787c83ca9ed,PodSandboxId:26bae25767fd90c7e18df6ab9f66cbec92b15a087b59eaedd138f3ca74aa7a25,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722277475742515222,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-976328,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3a0c3fce943009af839816b891ac22d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad7ab677d3311a89174206ae528f753ea5439656ab7db7cad86b4685066b7465,PodSandboxId:cc5d72b3c3274f25f18b24ce04d4db8a40467c9b039ad699870a2444b538dce4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722277156836871571,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mdnj5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 01e64b3b-f8da-4ecf-8914-f0bcea794606,},Annotations:map[string]string{io.kubernetes.container.hash: 603901c4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ede7653ba82d8013dc82cc34456497641549733349058713838d911322fe2052,PodSandboxId:e3756b7a777ec337e45a3be46d6644245b5cbdcab43bb99a73fbab59237098f0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722277102936035309,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sls9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c72421fc-93fc-42d7-8a68-93fe1f74686f,},Annotations:map[string]string{io.kubernetes.container.hash: 65eba02e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b584ffa95698b9cb0ec2f252ccbedd6d8c6c50da3e7cbf4707b2f007d97dc7f,PodSandboxId:838c7abd5f0e6ea85cb6374de70e5372923e2e8b7c49a0e36552fed0a5dd68a8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722277090774863426,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ttmqz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f226ace9-e1df-4171-bd7a-80c663
032a34,},Annotations:map[string]string{io.kubernetes.container.hash: 81b952b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd327222d7f72a72b500b2390cae8ebae3740bf6da2f97a1da3042c6b15897ac,PodSandboxId:d1d827567ad4e3c5fa168c044f54ff6a6363a7abfc8dfeecfa4c1f95dcc69fb1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722277088426604466,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5hqrk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d116a5b3-2d88-4c19-862a-ce4e6100b5c9,},Annotations:map[string]st
ring{io.kubernetes.container.hash: f1e8a241,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b8f2b9512e354691ad883e7a1e671367650c4594f37b48f7919a80e82fe46c9,PodSandboxId:196c65306bd33d78cee65d65848a13eca37b837a106f9be20bfeae8170a0b9bc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722277069074575446,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-976328,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9505709dfd9b02aeb696ed23f164e402,},Annotations:m
ap[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=349ba097-2e16-42e5-8413-cce93926d1eb name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	bd4bedb03eccd       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   27edcb9cac743       busybox-fc5497c4f-mdnj5
	942abb259c7e4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       2                   15d282b4a9515       storage-provisioner
	b9873fe03dfd6       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      4 minutes ago       Running             kube-apiserver            2                   94083e46604e3       kube-apiserver-multinode-976328
	192bdf369e557       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      4 minutes ago       Running             kube-scheduler            2                   26bae25767fd9       kube-scheduler-multinode-976328
	99890209de334       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      4 minutes ago       Running             etcd                      2                   c08bff072868f       etcd-multinode-976328
	2157c2885301b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   85b2e4245c414       coredns-7db6d8ff4d-sls9j
	166885d3e009f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Exited              storage-provisioner       1                   15d282b4a9515       storage-provisioner
	9604b38a357c9       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      4 minutes ago       Running             kindnet-cni               1                   e454bd2f9f0a2       kindnet-ttmqz
	d01c9ad4df1fa       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      4 minutes ago       Running             kube-proxy                1                   fe6973ba344af       kube-proxy-5hqrk
	54847580765e8       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago       Running             kube-controller-manager   1                   16e2eb1765c61       kube-controller-manager-multinode-976328
	71846f8a18b82       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      4 minutes ago       Exited              etcd                      1                   c08bff072868f       etcd-multinode-976328
	380c57a942e9b       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      4 minutes ago       Exited              kube-apiserver            1                   94083e46604e3       kube-apiserver-multinode-976328
	8b1718df722bf       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      4 minutes ago       Exited              kube-scheduler            1                   26bae25767fd9       kube-scheduler-multinode-976328
	ad7ab677d3311       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   cc5d72b3c3274       busybox-fc5497c4f-mdnj5
	ede7653ba82d8       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      10 minutes ago      Exited              coredns                   0                   e3756b7a777ec       coredns-7db6d8ff4d-sls9j
	1b584ffa95698       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    10 minutes ago      Exited              kindnet-cni               0                   838c7abd5f0e6       kindnet-ttmqz
	fd327222d7f72       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      10 minutes ago      Exited              kube-proxy                0                   d1d827567ad4e       kube-proxy-5hqrk
	3b8f2b9512e35       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      11 minutes ago      Exited              kube-controller-manager   0                   196c65306bd33       kube-controller-manager-multinode-976328
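
	The container status table above is the CRI-level view on the primary node at the time the logs were collected. Assuming the multinode-976328 profile is still running, roughly the same listing can be reproduced with crictl over minikube ssh (an illustrative sketch for reference, not part of the captured output):

	    out/minikube-linux-amd64 -p multinode-976328 ssh "sudo crictl ps -a"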
	
	
	==> coredns [2157c2885301b6ac6a8e148e42e6f0b9ef4d92b772885763166f26ffd267cf4c] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:41197 - 3790 "HINFO IN 1693272628972894029.8360814626276234203. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008625675s
	
	
	==> coredns [ede7653ba82d8013dc82cc34456497641549733349058713838d911322fe2052] <==
	[INFO] 10.244.1.2:33062 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004045534s
	[INFO] 10.244.1.2:56411 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000098408s
	[INFO] 10.244.1.2:40888 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000108015s
	[INFO] 10.244.1.2:60897 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001620298s
	[INFO] 10.244.1.2:37011 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000063055s
	[INFO] 10.244.1.2:41176 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000069437s
	[INFO] 10.244.1.2:34052 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000087464s
	[INFO] 10.244.0.3:54166 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132608s
	[INFO] 10.244.0.3:46094 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000043269s
	[INFO] 10.244.0.3:40883 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000036608s
	[INFO] 10.244.0.3:45269 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000034177s
	[INFO] 10.244.1.2:57880 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000103022s
	[INFO] 10.244.1.2:58599 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000075324s
	[INFO] 10.244.1.2:33226 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000063607s
	[INFO] 10.244.1.2:36852 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000059955s
	[INFO] 10.244.0.3:42550 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000214672s
	[INFO] 10.244.0.3:42550 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00014319s
	[INFO] 10.244.0.3:33082 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000099951s
	[INFO] 10.244.0.3:37802 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000092353s
	[INFO] 10.244.1.2:48413 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000219781s
	[INFO] 10.244.1.2:54768 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000105088s
	[INFO] 10.244.1.2:34397 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00019185s
	[INFO] 10.244.1.2:48793 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000089391s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
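
	The two coredns logs above come from the restarted container (2157c2885301b) and the original, now-exited container (ede7653ba82d8) of the same coredns-7db6d8ff4d-sls9j pod. Assuming the cluster is still reachable, the current and previous container logs could also be fetched directly with kubectl (illustrative commands, not captured output):

	    kubectl --context multinode-976328 -n kube-system logs coredns-7db6d8ff4d-sls9j
	    kubectl --context multinode-976328 -n kube-system logs coredns-7db6d8ff4d-sls9j --previous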
	
	
	==> describe nodes <==
	Name:               multinode-976328
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-976328
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=de53afae5e8f4269438099f4dad14d93a8a17e35
	                    minikube.k8s.io/name=multinode-976328
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T18_17_55_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 18:17:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-976328
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 18:28:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 18:24:52 +0000   Mon, 29 Jul 2024 18:17:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 18:24:52 +0000   Mon, 29 Jul 2024 18:17:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 18:24:52 +0000   Mon, 29 Jul 2024 18:17:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 18:24:52 +0000   Mon, 29 Jul 2024 18:18:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.211
	  Hostname:    multinode-976328
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e41ff0df74d7477398733fc105040655
	  System UUID:                e41ff0df-74d7-4773-9873-3fc105040655
	  Boot ID:                    79341e3d-5dfe-46e4-808a-ad4755aae2e8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-mdnj5                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m38s
	  kube-system                 coredns-7db6d8ff4d-sls9j                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-976328                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-ttmqz                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-976328             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-976328    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-5hqrk                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-976328             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 10m                  kube-proxy       
	  Normal  Starting                 4m                   kube-proxy       
	  Normal  Starting                 11m                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)    kubelet          Node multinode-976328 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)    kubelet          Node multinode-976328 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)    kubelet          Node multinode-976328 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    10m                  kubelet          Node multinode-976328 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  10m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                  kubelet          Node multinode-976328 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     10m                  kubelet          Node multinode-976328 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                  node-controller  Node multinode-976328 event: Registered Node multinode-976328 in Controller
	  Normal  NodeReady                10m                  kubelet          Node multinode-976328 status is now: NodeReady
	  Normal  Starting                 4m4s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m4s (x8 over 4m4s)  kubelet          Node multinode-976328 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m4s (x8 over 4m4s)  kubelet          Node multinode-976328 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m4s (x7 over 4m4s)  kubelet          Node multinode-976328 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m48s                node-controller  Node multinode-976328 event: Registered Node multinode-976328 in Controller
	
	
	Name:               multinode-976328-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-976328-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=de53afae5e8f4269438099f4dad14d93a8a17e35
	                    minikube.k8s.io/name=multinode-976328
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T18_25_33_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 18:25:32 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-976328-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 18:26:23 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 29 Jul 2024 18:26:03 +0000   Mon, 29 Jul 2024 18:27:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 29 Jul 2024 18:26:03 +0000   Mon, 29 Jul 2024 18:27:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 29 Jul 2024 18:26:03 +0000   Mon, 29 Jul 2024 18:27:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 29 Jul 2024 18:26:03 +0000   Mon, 29 Jul 2024 18:27:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.157
	  Hostname:    multinode-976328-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd9ef8f6b46f4ea5ac84757271819bbd
	  System UUID:                cd9ef8f6-b46f-4ea5-ac84-757271819bbd
	  Boot ID:                    02c53be0-542f-46d0-89f9-0a1a0168f13a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-cvmvd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m25s
	  kube-system                 kindnet-bgn52              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m59s
	  kube-system                 kube-proxy-kj7zh           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m16s                  kube-proxy       
	  Normal  Starting                 9m54s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  10m (x2 over 10m)      kubelet          Node multinode-976328-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x2 over 10m)      kubelet          Node multinode-976328-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x2 over 10m)      kubelet          Node multinode-976328-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m40s                  kubelet          Node multinode-976328-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m21s (x2 over 3m21s)  kubelet          Node multinode-976328-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m21s (x2 over 3m21s)  kubelet          Node multinode-976328-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m21s (x2 over 3m21s)  kubelet          Node multinode-976328-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m3s                   kubelet          Node multinode-976328-m02 status is now: NodeReady
	  Normal  NodeNotReady             108s                   node-controller  Node multinode-976328-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.058313] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.175121] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.142401] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.264451] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +4.061507] systemd-fstab-generator[757]: Ignoring "noauto" option for root device
	[  +3.800230] systemd-fstab-generator[933]: Ignoring "noauto" option for root device
	[  +0.063793] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.989440] systemd-fstab-generator[1278]: Ignoring "noauto" option for root device
	[  +0.077047] kauditd_printk_skb: 69 callbacks suppressed
	[Jul29 18:18] kauditd_printk_skb: 21 callbacks suppressed
	[  +0.103491] systemd-fstab-generator[1509]: Ignoring "noauto" option for root device
	[ +14.437930] kauditd_printk_skb: 60 callbacks suppressed
	[Jul29 18:19] kauditd_printk_skb: 14 callbacks suppressed
	[Jul29 18:24] systemd-fstab-generator[2834]: Ignoring "noauto" option for root device
	[  +0.137021] systemd-fstab-generator[2846]: Ignoring "noauto" option for root device
	[  +0.178863] systemd-fstab-generator[2860]: Ignoring "noauto" option for root device
	[  +0.139088] systemd-fstab-generator[2872]: Ignoring "noauto" option for root device
	[  +0.278881] systemd-fstab-generator[2900]: Ignoring "noauto" option for root device
	[  +4.814543] systemd-fstab-generator[3009]: Ignoring "noauto" option for root device
	[  +0.082402] kauditd_printk_skb: 105 callbacks suppressed
	[  +6.004651] kauditd_printk_skb: 43 callbacks suppressed
	[  +5.019245] kauditd_printk_skb: 20 callbacks suppressed
	[  +2.696005] systemd-fstab-generator[3864]: Ignoring "noauto" option for root device
	[  +3.760683] kauditd_printk_skb: 55 callbacks suppressed
	[Jul29 18:25] systemd-fstab-generator[4272]: Ignoring "noauto" option for root device
	
	
	==> etcd [71846f8a18b8218eaba74710cd9a2b74113bf1ddc0b85da0142bd2ff10d376e9] <==
	{"level":"info","ts":"2024-07-29T18:24:36.123307Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"19.06495ms"}
	{"level":"info","ts":"2024-07-29T18:24:36.15922Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-07-29T18:24:36.177027Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"a3f4522b5c780b58","local-member-id":"d3f1da2044f49cdd","commit-index":982}
	{"level":"info","ts":"2024-07-29T18:24:36.177205Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3f1da2044f49cdd switched to configuration voters=()"}
	{"level":"info","ts":"2024-07-29T18:24:36.177262Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3f1da2044f49cdd became follower at term 2"}
	{"level":"info","ts":"2024-07-29T18:24:36.177274Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft d3f1da2044f49cdd [peers: [], term: 2, commit: 982, applied: 0, lastindex: 982, lastterm: 2]"}
	{"level":"warn","ts":"2024-07-29T18:24:36.181592Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-07-29T18:24:36.207434Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":897}
	{"level":"info","ts":"2024-07-29T18:24:36.216949Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-07-29T18:24:36.222047Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"d3f1da2044f49cdd","timeout":"7s"}
	{"level":"info","ts":"2024-07-29T18:24:36.223119Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"d3f1da2044f49cdd"}
	{"level":"info","ts":"2024-07-29T18:24:36.223264Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"d3f1da2044f49cdd","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-07-29T18:24:36.223481Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T18:24:36.223721Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T18:24:36.2238Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T18:24:36.224101Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-07-29T18:24:36.225101Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3f1da2044f49cdd switched to configuration voters=(15272227643520752861)"}
	{"level":"info","ts":"2024-07-29T18:24:36.225361Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"a3f4522b5c780b58","local-member-id":"d3f1da2044f49cdd","added-peer-id":"d3f1da2044f49cdd","added-peer-peer-urls":["https://192.168.39.211:2380"]}
	{"level":"info","ts":"2024-07-29T18:24:36.225606Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a3f4522b5c780b58","local-member-id":"d3f1da2044f49cdd","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T18:24:36.225717Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T18:24:36.240678Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T18:24:36.240973Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"d3f1da2044f49cdd","initial-advertise-peer-urls":["https://192.168.39.211:2380"],"listen-peer-urls":["https://192.168.39.211:2380"],"advertise-client-urls":["https://192.168.39.211:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.211:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T18:24:36.241032Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T18:24:36.241964Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.211:2380"}
	{"level":"info","ts":"2024-07-29T18:24:36.242014Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.211:2380"}
	
	
	==> etcd [99890209de3349ae1acf10d288f7150011e6631a63614be3a0c65a1939bd9b6e] <==
	{"level":"info","ts":"2024-07-29T18:24:50.03168Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T18:24:50.028088Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-07-29T18:24:50.032002Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3f1da2044f49cdd switched to configuration voters=(15272227643520752861)"}
	{"level":"info","ts":"2024-07-29T18:24:50.032266Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.211:2380"}
	{"level":"info","ts":"2024-07-29T18:24:50.03336Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T18:24:50.035246Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T18:24:50.035491Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T18:24:50.035397Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"a3f4522b5c780b58","local-member-id":"d3f1da2044f49cdd","added-peer-id":"d3f1da2044f49cdd","added-peer-peer-urls":["https://192.168.39.211:2380"]}
	{"level":"info","ts":"2024-07-29T18:24:50.035658Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a3f4522b5c780b58","local-member-id":"d3f1da2044f49cdd","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T18:24:50.035713Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T18:24:50.035434Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.211:2380"}
	{"level":"info","ts":"2024-07-29T18:24:51.095331Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3f1da2044f49cdd is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-29T18:24:51.095397Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3f1da2044f49cdd became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-29T18:24:51.095428Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3f1da2044f49cdd received MsgPreVoteResp from d3f1da2044f49cdd at term 2"}
	{"level":"info","ts":"2024-07-29T18:24:51.09544Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3f1da2044f49cdd became candidate at term 3"}
	{"level":"info","ts":"2024-07-29T18:24:51.095474Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3f1da2044f49cdd received MsgVoteResp from d3f1da2044f49cdd at term 3"}
	{"level":"info","ts":"2024-07-29T18:24:51.095494Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3f1da2044f49cdd became leader at term 3"}
	{"level":"info","ts":"2024-07-29T18:24:51.095508Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d3f1da2044f49cdd elected leader d3f1da2044f49cdd at term 3"}
	{"level":"info","ts":"2024-07-29T18:24:51.100312Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"d3f1da2044f49cdd","local-member-attributes":"{Name:multinode-976328 ClientURLs:[https://192.168.39.211:2379]}","request-path":"/0/members/d3f1da2044f49cdd/attributes","cluster-id":"a3f4522b5c780b58","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T18:24:51.10036Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T18:24:51.10095Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T18:24:51.102617Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.211:2379"}
	{"level":"info","ts":"2024-07-29T18:24:51.102691Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T18:24:51.103364Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T18:24:51.108332Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 18:28:53 up 11 min,  0 users,  load average: 0.14, 0.17, 0.10
	Linux multinode-976328 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [1b584ffa95698b9cb0ec2f252ccbedd6d8c6c50da3e7cbf4707b2f007d97dc7f] <==
	I0729 18:22:11.746040       1 main.go:322] Node multinode-976328-m03 has CIDR [10.244.3.0/24] 
	I0729 18:22:21.747611       1 main.go:295] Handling node with IPs: map[192.168.39.211:{}]
	I0729 18:22:21.747669       1 main.go:299] handling current node
	I0729 18:22:21.747685       1 main.go:295] Handling node with IPs: map[192.168.39.157:{}]
	I0729 18:22:21.747690       1 main.go:322] Node multinode-976328-m02 has CIDR [10.244.1.0/24] 
	I0729 18:22:21.747859       1 main.go:295] Handling node with IPs: map[192.168.39.144:{}]
	I0729 18:22:21.747885       1 main.go:322] Node multinode-976328-m03 has CIDR [10.244.3.0/24] 
	I0729 18:22:31.754126       1 main.go:295] Handling node with IPs: map[192.168.39.211:{}]
	I0729 18:22:31.754346       1 main.go:299] handling current node
	I0729 18:22:31.754382       1 main.go:295] Handling node with IPs: map[192.168.39.157:{}]
	I0729 18:22:31.754402       1 main.go:322] Node multinode-976328-m02 has CIDR [10.244.1.0/24] 
	I0729 18:22:31.754552       1 main.go:295] Handling node with IPs: map[192.168.39.144:{}]
	I0729 18:22:31.754573       1 main.go:322] Node multinode-976328-m03 has CIDR [10.244.3.0/24] 
	I0729 18:22:41.754826       1 main.go:295] Handling node with IPs: map[192.168.39.211:{}]
	I0729 18:22:41.754863       1 main.go:299] handling current node
	I0729 18:22:41.754901       1 main.go:295] Handling node with IPs: map[192.168.39.157:{}]
	I0729 18:22:41.754908       1 main.go:322] Node multinode-976328-m02 has CIDR [10.244.1.0/24] 
	I0729 18:22:41.755017       1 main.go:295] Handling node with IPs: map[192.168.39.144:{}]
	I0729 18:22:41.755040       1 main.go:322] Node multinode-976328-m03 has CIDR [10.244.3.0/24] 
	I0729 18:22:51.751017       1 main.go:295] Handling node with IPs: map[192.168.39.144:{}]
	I0729 18:22:51.751141       1 main.go:322] Node multinode-976328-m03 has CIDR [10.244.3.0/24] 
	I0729 18:22:51.751366       1 main.go:295] Handling node with IPs: map[192.168.39.211:{}]
	I0729 18:22:51.751411       1 main.go:299] handling current node
	I0729 18:22:51.751441       1 main.go:295] Handling node with IPs: map[192.168.39.157:{}]
	I0729 18:22:51.751458       1 main.go:322] Node multinode-976328-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [9604b38a357c90bcf05ce45f13f6a4fb7d4dc430b9394b7eb8154f86776bd074] <==
	I0729 18:27:45.445851       1 main.go:299] handling current node
	I0729 18:27:55.444810       1 main.go:295] Handling node with IPs: map[192.168.39.211:{}]
	I0729 18:27:55.444996       1 main.go:299] handling current node
	I0729 18:27:55.445078       1 main.go:295] Handling node with IPs: map[192.168.39.157:{}]
	I0729 18:27:55.445104       1 main.go:322] Node multinode-976328-m02 has CIDR [10.244.1.0/24] 
	I0729 18:28:05.453678       1 main.go:295] Handling node with IPs: map[192.168.39.211:{}]
	I0729 18:28:05.453852       1 main.go:299] handling current node
	I0729 18:28:05.453879       1 main.go:295] Handling node with IPs: map[192.168.39.157:{}]
	I0729 18:28:05.453897       1 main.go:322] Node multinode-976328-m02 has CIDR [10.244.1.0/24] 
	I0729 18:28:15.450089       1 main.go:295] Handling node with IPs: map[192.168.39.211:{}]
	I0729 18:28:15.450297       1 main.go:299] handling current node
	I0729 18:28:15.450329       1 main.go:295] Handling node with IPs: map[192.168.39.157:{}]
	I0729 18:28:15.450352       1 main.go:322] Node multinode-976328-m02 has CIDR [10.244.1.0/24] 
	I0729 18:28:25.446339       1 main.go:295] Handling node with IPs: map[192.168.39.211:{}]
	I0729 18:28:25.446535       1 main.go:299] handling current node
	I0729 18:28:25.446574       1 main.go:295] Handling node with IPs: map[192.168.39.157:{}]
	I0729 18:28:25.446595       1 main.go:322] Node multinode-976328-m02 has CIDR [10.244.1.0/24] 
	I0729 18:28:35.454316       1 main.go:295] Handling node with IPs: map[192.168.39.211:{}]
	I0729 18:28:35.454368       1 main.go:299] handling current node
	I0729 18:28:35.454388       1 main.go:295] Handling node with IPs: map[192.168.39.157:{}]
	I0729 18:28:35.454404       1 main.go:322] Node multinode-976328-m02 has CIDR [10.244.1.0/24] 
	I0729 18:28:45.445445       1 main.go:295] Handling node with IPs: map[192.168.39.211:{}]
	I0729 18:28:45.445512       1 main.go:299] handling current node
	I0729 18:28:45.445534       1 main.go:295] Handling node with IPs: map[192.168.39.157:{}]
	I0729 18:28:45.445540       1 main.go:322] Node multinode-976328-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [380c57a942e9b8e8da4aeec3c5e4acfb75abd6545a05b4844bcd31342dcd2d56] <==
	I0729 18:24:36.264701       1 options.go:221] external host was not specified, using 192.168.39.211
	I0729 18:24:36.271517       1 server.go:148] Version: v1.30.3
	I0729 18:24:36.271551       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W0729 18:24:36.836219       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:24:36.836385       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0729 18:24:36.836460       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0729 18:24:36.844519       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 18:24:36.845785       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0729 18:24:36.845843       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0729 18:24:36.846053       1 instance.go:299] Using reconciler: lease
	W0729 18:24:36.852344       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:24:37.836913       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:24:37.837025       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:24:37.855076       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:24:39.271630       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:24:39.329860       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:24:39.346812       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:24:41.662752       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:24:42.035888       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:24:42.354126       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:24:46.008458       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 18:24:46.410713       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [b9873fe03dfd66dac4612fc75aad173260796756b67a32a4e0c38b2ca71fba9c] <==
	I0729 18:24:52.532482       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0729 18:24:52.532901       1 shared_informer.go:320] Caches are synced for configmaps
	I0729 18:24:52.533134       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 18:24:52.537533       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 18:24:52.548924       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0729 18:24:52.537569       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0729 18:24:52.549606       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0729 18:24:52.549994       1 aggregator.go:165] initial CRD sync complete...
	I0729 18:24:52.550752       1 autoregister_controller.go:141] Starting autoregister controller
	I0729 18:24:52.550843       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0729 18:24:52.550868       1 cache.go:39] Caches are synced for autoregister controller
	I0729 18:24:52.551992       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E0729 18:24:52.574827       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0729 18:24:53.444049       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0729 18:24:54.172647       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 18:24:54.290828       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0729 18:24:54.305892       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0729 18:24:54.369659       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0729 18:24:54.376877       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0729 18:25:05.752802       1 controller.go:615] quota admission added evaluator for: endpoints
	I0729 18:25:05.840046       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0729 18:26:09.649597       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0729 18:26:09.649787       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0729 18:26:09.651011       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0729 18:26:09.651120       1 timeout.go:142] post-timeout activity - time-elapsed: 1.631536ms, GET "/api/v1/services" result: <nil>
	
	
	==> kube-controller-manager [3b8f2b9512e354691ad883e7a1e671367650c4594f37b48f7919a80e82fe46c9] <==
	I0729 18:18:53.993446       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-976328-m02\" does not exist"
	I0729 18:18:54.070668       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-976328-m02" podCIDRs=["10.244.1.0/24"]
	I0729 18:18:57.169451       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-976328-m02"
	I0729 18:19:13.259406       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-976328-m02"
	I0729 18:19:15.563841       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.898995ms"
	I0729 18:19:15.586110       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="22.121072ms"
	I0729 18:19:15.599935       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.550577ms"
	I0729 18:19:15.600051       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="63.795µs"
	I0729 18:19:17.619417       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.481893ms"
	I0729 18:19:17.619608       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="71.664µs"
	I0729 18:19:17.885327       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.434412ms"
	I0729 18:19:17.886361       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.702µs"
	I0729 18:19:48.178917       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-976328-m02"
	I0729 18:19:48.179061       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-976328-m03\" does not exist"
	I0729 18:19:48.223503       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-976328-m03" podCIDRs=["10.244.2.0/24"]
	I0729 18:19:52.190570       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-976328-m03"
	I0729 18:20:06.992053       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-976328-m02"
	I0729 18:20:34.280747       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-976328-m02"
	I0729 18:20:35.378442       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-976328-m02"
	I0729 18:20:35.378538       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-976328-m03\" does not exist"
	I0729 18:20:35.392694       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-976328-m03" podCIDRs=["10.244.3.0/24"]
	I0729 18:20:53.136750       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-976328-m03"
	I0729 18:21:37.245151       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-976328-m02"
	I0729 18:21:37.319893       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.941452ms"
	I0729 18:21:37.321560       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="202.224µs"
	
	
	==> kube-controller-manager [54847580765e8923b115405b28cd71d12c1fe0b6a89f33710e310d7720dc35bb] <==
	I0729 18:25:32.721848       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-976328-m02\" does not exist"
	I0729 18:25:32.742763       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-976328-m02" podCIDRs=["10.244.1.0/24"]
	I0729 18:25:34.636043       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.515µs"
	I0729 18:25:34.651354       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.177µs"
	I0729 18:25:34.659589       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.842µs"
	I0729 18:25:34.675520       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="64.787µs"
	I0729 18:25:34.687345       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.426µs"
	I0729 18:25:34.692658       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.977µs"
	I0729 18:25:50.470040       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-976328-m02"
	I0729 18:25:50.509004       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.409µs"
	I0729 18:25:50.521577       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.874µs"
	I0729 18:25:51.956334       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.555593ms"
	I0729 18:25:51.956566       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.882µs"
	I0729 18:26:07.728294       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-976328-m02"
	I0729 18:26:08.812970       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-976328-m03\" does not exist"
	I0729 18:26:08.813137       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-976328-m02"
	I0729 18:26:08.823757       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-976328-m03" podCIDRs=["10.244.2.0/24"]
	I0729 18:26:26.543547       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-976328-m03"
	I0729 18:26:31.936008       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-976328-m02"
	I0729 18:27:05.887516       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.924403ms"
	I0729 18:27:05.893379       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="74.82µs"
	I0729 18:27:45.797927       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-jj2s8"
	I0729 18:27:45.822505       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-jj2s8"
	I0729 18:27:45.822740       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-nwpsp"
	I0729 18:27:45.847145       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-nwpsp"
	
	
	==> kube-proxy [d01c9ad4df1fab0ab613b4207329e5ba2139310e3edf3d01a207fd27a46a48df] <==
	I0729 18:24:43.442347       1 server_linux.go:69] "Using iptables proxy"
	E0729 18:24:47.526647       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/multinode-976328\": dial tcp 192.168.39.211:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.211:46198->192.168.39.211:8443: read: connection reset by peer"
	E0729 18:24:48.707078       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/multinode-976328\": dial tcp 192.168.39.211:8443: connect: connection refused"
	I0729 18:24:52.554978       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.211"]
	I0729 18:24:52.623431       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 18:24:52.623473       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 18:24:52.623490       1 server_linux.go:165] "Using iptables Proxier"
	I0729 18:24:52.625986       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 18:24:52.626296       1 server.go:872] "Version info" version="v1.30.3"
	I0729 18:24:52.626542       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 18:24:52.627933       1 config.go:192] "Starting service config controller"
	I0729 18:24:52.627986       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 18:24:52.628024       1 config.go:101] "Starting endpoint slice config controller"
	I0729 18:24:52.628040       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 18:24:52.628745       1 config.go:319] "Starting node config controller"
	I0729 18:24:52.630237       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 18:24:52.728802       1 shared_informer.go:320] Caches are synced for service config
	I0729 18:24:52.728818       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 18:24:52.730367       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [fd327222d7f72a72b500b2390cae8ebae3740bf6da2f97a1da3042c6b15897ac] <==
	I0729 18:18:08.719117       1 server_linux.go:69] "Using iptables proxy"
	I0729 18:18:08.768098       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.211"]
	I0729 18:18:08.897958       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 18:18:08.898008       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 18:18:08.898026       1 server_linux.go:165] "Using iptables Proxier"
	I0729 18:18:08.905548       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 18:18:08.906950       1 server.go:872] "Version info" version="v1.30.3"
	I0729 18:18:08.907104       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 18:18:08.910558       1 config.go:192] "Starting service config controller"
	I0729 18:18:08.911104       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 18:18:08.911306       1 config.go:101] "Starting endpoint slice config controller"
	I0729 18:18:08.911315       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 18:18:08.913853       1 config.go:319] "Starting node config controller"
	I0729 18:18:08.913860       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 18:18:09.011564       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 18:18:09.011619       1 shared_informer.go:320] Caches are synced for service config
	I0729 18:18:09.014775       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [192bdf369e557b0185eaaf55a2b84baee3b192d72ca04820b103f49a17302a92] <==
	I0729 18:24:50.466788       1 serving.go:380] Generated self-signed cert in-memory
	W0729 18:24:52.487443       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 18:24:52.487541       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 18:24:52.487577       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 18:24:52.487655       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 18:24:52.552582       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0729 18:24:52.552677       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 18:24:52.558969       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0729 18:24:52.577553       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 18:24:52.577643       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 18:24:52.577672       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 18:24:52.678667       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [8b1718df722bf8960f9a0f853064dc20e706ade11255ddaffd2d6787c83ca9ed] <==
	I0729 18:24:36.728896       1 serving.go:380] Generated self-signed cert in-memory
	W0729 18:24:47.524611       1 authentication.go:368] Error looking up in-cluster authentication configuration: Get "https://192.168.39.211:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.39.211:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.211:40772->192.168.39.211:8443: read: connection reset by peer
	W0729 18:24:47.524795       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 18:24:47.524825       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 18:24:47.545086       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0729 18:24:47.545126       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 18:24:47.546578       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 18:24:47.546639       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0729 18:24:47.546674       1 shared_informer.go:316] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 18:24:47.546698       1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 18:24:47.546716       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	E0729 18:24:47.546799       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	E0729 18:24:47.547006       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 29 18:24:53 multinode-976328 kubelet[3871]: I0729 18:24:53.392416    3871 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f226ace9-e1df-4171-bd7a-80c663032a34-lib-modules\") pod \"kindnet-ttmqz\" (UID: \"f226ace9-e1df-4171-bd7a-80c663032a34\") " pod="kube-system/kindnet-ttmqz"
	Jul 29 18:24:53 multinode-976328 kubelet[3871]: I0729 18:24:53.392462    3871 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f226ace9-e1df-4171-bd7a-80c663032a34-xtables-lock\") pod \"kindnet-ttmqz\" (UID: \"f226ace9-e1df-4171-bd7a-80c663032a34\") " pod="kube-system/kindnet-ttmqz"
	Jul 29 18:24:53 multinode-976328 kubelet[3871]: I0729 18:24:53.392492    3871 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d116a5b3-2d88-4c19-862a-ce4e6100b5c9-xtables-lock\") pod \"kube-proxy-5hqrk\" (UID: \"d116a5b3-2d88-4c19-862a-ce4e6100b5c9\") " pod="kube-system/kube-proxy-5hqrk"
	Jul 29 18:24:53 multinode-976328 kubelet[3871]: I0729 18:24:53.392514    3871 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d116a5b3-2d88-4c19-862a-ce4e6100b5c9-lib-modules\") pod \"kube-proxy-5hqrk\" (UID: \"d116a5b3-2d88-4c19-862a-ce4e6100b5c9\") " pod="kube-system/kube-proxy-5hqrk"
	Jul 29 18:24:53 multinode-976328 kubelet[3871]: I0729 18:24:53.560063    3871 scope.go:117] "RemoveContainer" containerID="166885d3e009fa3e0be8b5922d734ccb35202f737b5efbd4ececb4b8ac56acec"
	Jul 29 18:25:49 multinode-976328 kubelet[3871]: E0729 18:25:49.350277    3871 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 18:25:49 multinode-976328 kubelet[3871]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 18:25:49 multinode-976328 kubelet[3871]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 18:25:49 multinode-976328 kubelet[3871]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 18:25:49 multinode-976328 kubelet[3871]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 18:26:49 multinode-976328 kubelet[3871]: E0729 18:26:49.349301    3871 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 18:26:49 multinode-976328 kubelet[3871]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 18:26:49 multinode-976328 kubelet[3871]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 18:26:49 multinode-976328 kubelet[3871]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 18:26:49 multinode-976328 kubelet[3871]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 18:27:49 multinode-976328 kubelet[3871]: E0729 18:27:49.348576    3871 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 18:27:49 multinode-976328 kubelet[3871]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 18:27:49 multinode-976328 kubelet[3871]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 18:27:49 multinode-976328 kubelet[3871]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 18:27:49 multinode-976328 kubelet[3871]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 18:28:49 multinode-976328 kubelet[3871]: E0729 18:28:49.349601    3871 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 18:28:49 multinode-976328 kubelet[3871]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 18:28:49 multinode-976328 kubelet[3871]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 18:28:49 multinode-976328 kubelet[3871]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 18:28:49 multinode-976328 kubelet[3871]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 18:28:52.757958  125777 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19339-88081/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
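
The "bufio.Scanner: token too long" failure in the stderr capture above comes from Go's bufio.Scanner rejecting any line longer than its default 64 KiB buffer. A minimal sketch of reading such an oversized log line with an enlarged buffer follows; the file path and buffer sizes are illustrative only and this is not minikube's own logs helper:

package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	// Illustrative path only; the real file in the log lives under the Jenkins workspace.
	f, err := os.Open("/tmp/lastStart.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// Raise the per-line limit from the default 64 KiB so very long lines no
	// longer fail with bufio.ErrTooLong ("token too long").
	sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "scan error:", err)
	}
}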
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-976328 -n multinode-976328
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-976328 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.41s)

                                                
                                    
TestPreload (272.4s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-275412 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0729 18:33:18.903466   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/functional-810151/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-275412 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m11.343767396s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-275412 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-275412 image pull gcr.io/k8s-minikube/busybox: (1.103681028s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-275412
E0729 18:35:36.382552   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/client.crt: no such file or directory
E0729 18:35:53.336845   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/client.crt: no such file or directory
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-275412: exit status 82 (2m0.445926933s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-275412"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-275412 failed: exit status 82
panic.go:626: *** TestPreload FAILED at 2024-07-29 18:37:00.022067021 +0000 UTC m=+3829.992770801
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-275412 -n test-preload-275412
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-275412 -n test-preload-275412: exit status 3 (18.519157095s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 18:37:18.537294  128564 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.28:22: connect: no route to host
	E0729 18:37:18.537327  128564 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.28:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-275412" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-275412" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-275412
--- FAIL: TestPreload (272.40s)
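The post-mortem status check above fails with "dial tcp 192.168.39.28:22: connect: no route to host" because the VM never reached an addressable state after the stop timed out. A minimal sketch, not minikube's status code, of that kind of SSH reachability probe:

    // A sketch of probing a node's SSH port the way the failing status check above does.
    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func sshReachable(addr string, timeout time.Duration) error {
    	conn, err := net.DialTimeout("tcp", addr, timeout)
    	if err != nil {
    		return fmt.Errorf("host %s is not reachable over SSH: %w", addr, err)
    	}
    	conn.Close()
    	return nil
    }

    func main() {
    	// 192.168.39.28 is the address reported in the post-mortem above.
    	if err := sshReachable("192.168.39.28:22", 5*time.Second); err != nil {
    		fmt.Println(err) // e.g. "... connect: no route to host"
    	}
    }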

                                                
                                    
TestKubernetesUpgrade (426.42s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-695907 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-695907 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (5m36.517919791s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-695907] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19339
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19339-88081/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19339-88081/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-695907" primary control-plane node in "kubernetes-upgrade-695907" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 18:39:19.099284  132010 out.go:291] Setting OutFile to fd 1 ...
	I0729 18:39:19.099411  132010 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:39:19.099422  132010 out.go:304] Setting ErrFile to fd 2...
	I0729 18:39:19.099428  132010 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:39:19.099621  132010 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19339-88081/.minikube/bin
	I0729 18:39:19.100172  132010 out.go:298] Setting JSON to false
	I0729 18:39:19.101076  132010 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":12079,"bootTime":1722266280,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 18:39:19.101139  132010 start.go:139] virtualization: kvm guest
	I0729 18:39:19.103088  132010 out.go:177] * [kubernetes-upgrade-695907] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 18:39:19.104459  132010 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 18:39:19.104500  132010 notify.go:220] Checking for updates...
	I0729 18:39:19.106648  132010 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 18:39:19.107777  132010 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19339-88081/kubeconfig
	I0729 18:39:19.108788  132010 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19339-88081/.minikube
	I0729 18:39:19.109983  132010 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 18:39:19.111215  132010 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 18:39:19.112847  132010 config.go:182] Loaded profile config "NoKubernetes-790573": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:39:19.112993  132010 config.go:182] Loaded profile config "force-systemd-env-801126": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:39:19.113098  132010 config.go:182] Loaded profile config "offline-crio-778169": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:39:19.113188  132010 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 18:39:19.145140  132010 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 18:39:19.146443  132010 start.go:297] selected driver: kvm2
	I0729 18:39:19.146457  132010 start.go:901] validating driver "kvm2" against <nil>
	I0729 18:39:19.146471  132010 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 18:39:19.147210  132010 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 18:39:19.147288  132010 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19339-88081/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 18:39:19.162101  132010 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 18:39:19.162151  132010 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 18:39:19.162376  132010 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 18:39:19.162402  132010 cni.go:84] Creating CNI manager for ""
	I0729 18:39:19.162412  132010 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:39:19.162426  132010 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 18:39:19.162492  132010 start.go:340] cluster config:
	{Name:kubernetes-upgrade-695907 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-695907 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:39:19.162605  132010 iso.go:125] acquiring lock: {Name:mkff602c753bfa3e0d79d57ee8dc490f2e8d0298 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 18:39:19.164298  132010 out.go:177] * Starting "kubernetes-upgrade-695907" primary control-plane node in "kubernetes-upgrade-695907" cluster
	I0729 18:39:19.165453  132010 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 18:39:19.165489  132010 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 18:39:19.165501  132010 cache.go:56] Caching tarball of preloaded images
	I0729 18:39:19.165591  132010 preload.go:172] Found /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 18:39:19.165604  132010 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0729 18:39:19.165710  132010 profile.go:143] Saving config to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/kubernetes-upgrade-695907/config.json ...
	I0729 18:39:19.165733  132010 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/kubernetes-upgrade-695907/config.json: {Name:mk5cd385197df06d1f1fab7fc1fc02e0a822ca70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:39:19.165877  132010 start.go:360] acquireMachinesLock for kubernetes-upgrade-695907: {Name:mkb4fc91615189a18c2505c715f6575ee0da2912 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 18:40:24.073565  132010 start.go:364] duration metric: took 1m4.907652582s to acquireMachinesLock for "kubernetes-upgrade-695907"
	I0729 18:40:24.073625  132010 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-695907 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-695907 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 18:40:24.073738  132010 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 18:40:24.075602  132010 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 18:40:24.075813  132010 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:40:24.075877  132010 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:40:24.094122  132010 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35397
	I0729 18:40:24.094693  132010 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:40:24.095390  132010 main.go:141] libmachine: Using API Version  1
	I0729 18:40:24.095414  132010 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:40:24.095855  132010 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:40:24.096036  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetMachineName
	I0729 18:40:24.096225  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .DriverName
	I0729 18:40:24.096371  132010 start.go:159] libmachine.API.Create for "kubernetes-upgrade-695907" (driver="kvm2")
	I0729 18:40:24.096401  132010 client.go:168] LocalClient.Create starting
	I0729 18:40:24.096438  132010 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem
	I0729 18:40:24.096479  132010 main.go:141] libmachine: Decoding PEM data...
	I0729 18:40:24.096500  132010 main.go:141] libmachine: Parsing certificate...
	I0729 18:40:24.096642  132010 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem
	I0729 18:40:24.096671  132010 main.go:141] libmachine: Decoding PEM data...
	I0729 18:40:24.096690  132010 main.go:141] libmachine: Parsing certificate...
	I0729 18:40:24.096717  132010 main.go:141] libmachine: Running pre-create checks...
	I0729 18:40:24.096736  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .PreCreateCheck
	I0729 18:40:24.097093  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetConfigRaw
	I0729 18:40:24.097508  132010 main.go:141] libmachine: Creating machine...
	I0729 18:40:24.097526  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .Create
	I0729 18:40:24.097670  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Creating KVM machine...
	I0729 18:40:24.099073  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | found existing default KVM network
	I0729 18:40:24.100081  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | I0729 18:40:24.099893  132754 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:28:07:91} reservation:<nil>}
	I0729 18:40:24.102480  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | I0729 18:40:24.102323  132754 network.go:209] skipping subnet 192.168.50.0/24 that is reserved: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 18:40:24.103355  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | I0729 18:40:24.103255  132754 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:70:41:9d} reservation:<nil>}
	I0729 18:40:24.104233  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | I0729 18:40:24.104159  132754 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000305220}
	I0729 18:40:24.104313  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | created network xml: 
	I0729 18:40:24.104339  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | <network>
	I0729 18:40:24.104371  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG |   <name>mk-kubernetes-upgrade-695907</name>
	I0729 18:40:24.104409  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG |   <dns enable='no'/>
	I0729 18:40:24.104420  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG |   
	I0729 18:40:24.104430  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0729 18:40:24.104472  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG |     <dhcp>
	I0729 18:40:24.104498  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0729 18:40:24.104513  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG |     </dhcp>
	I0729 18:40:24.104525  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG |   </ip>
	I0729 18:40:24.104535  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG |   
	I0729 18:40:24.104546  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | </network>
	I0729 18:40:24.104586  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | 
	I0729 18:40:24.110174  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | trying to create private KVM network mk-kubernetes-upgrade-695907 192.168.72.0/24...
	I0729 18:40:24.182792  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | private KVM network mk-kubernetes-upgrade-695907 192.168.72.0/24 created
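The <network> definition logged above is plain libvirt XML. A sketch that builds the same shape with encoding/xml from the standard library; the struct names are mine, not the driver's:

    // Marshals a libvirt <network> definition like the one the driver logged above.
    package main

    import (
    	"encoding/xml"
    	"fmt"
    )

    type dhcpRange struct {
    	Start string `xml:"start,attr"`
    	End   string `xml:"end,attr"`
    }

    type network struct {
    	XMLName xml.Name `xml:"network"`
    	Name    string   `xml:"name"`
    	DNS     struct {
    		Enable string `xml:"enable,attr"`
    	} `xml:"dns"`
    	IP struct {
    		Address string `xml:"address,attr"`
    		Netmask string `xml:"netmask,attr"`
    		DHCP    struct {
    			Range dhcpRange `xml:"range"`
    		} `xml:"dhcp"`
    	} `xml:"ip"`
    }

    func main() {
    	n := network{Name: "mk-kubernetes-upgrade-695907"}
    	n.DNS.Enable = "no"
    	n.IP.Address = "192.168.72.1"
    	n.IP.Netmask = "255.255.255.0"
    	n.IP.DHCP.Range = dhcpRange{Start: "192.168.72.2", End: "192.168.72.253"}

    	out, err := xml.MarshalIndent(n, "", "  ")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(string(out)) // approximates the XML passed to libvirt above
    }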
	I0729 18:40:24.182844  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Setting up store path in /home/jenkins/minikube-integration/19339-88081/.minikube/machines/kubernetes-upgrade-695907 ...
	I0729 18:40:24.182860  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | I0729 18:40:24.182779  132754 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19339-88081/.minikube
	I0729 18:40:24.182878  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Building disk image from file:///home/jenkins/minikube-integration/19339-88081/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0729 18:40:24.182929  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Downloading /home/jenkins/minikube-integration/19339-88081/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19339-88081/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0729 18:40:24.443281  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | I0729 18:40:24.443092  132754 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/kubernetes-upgrade-695907/id_rsa...
	I0729 18:40:24.560225  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | I0729 18:40:24.560079  132754 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/kubernetes-upgrade-695907/kubernetes-upgrade-695907.rawdisk...
	I0729 18:40:24.560265  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | Writing magic tar header
	I0729 18:40:24.560284  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | Writing SSH key tar header
	I0729 18:40:24.560297  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | I0729 18:40:24.560242  132754 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19339-88081/.minikube/machines/kubernetes-upgrade-695907 ...
	I0729 18:40:24.560370  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/kubernetes-upgrade-695907
	I0729 18:40:24.560438  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Setting executable bit set on /home/jenkins/minikube-integration/19339-88081/.minikube/machines/kubernetes-upgrade-695907 (perms=drwx------)
	I0729 18:40:24.560475  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19339-88081/.minikube/machines
	I0729 18:40:24.560490  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Setting executable bit set on /home/jenkins/minikube-integration/19339-88081/.minikube/machines (perms=drwxr-xr-x)
	I0729 18:40:24.560511  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19339-88081/.minikube
	I0729 18:40:24.560527  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19339-88081
	I0729 18:40:24.560538  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 18:40:24.560548  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | Checking permissions on dir: /home/jenkins
	I0729 18:40:24.560561  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Setting executable bit set on /home/jenkins/minikube-integration/19339-88081/.minikube (perms=drwxr-xr-x)
	I0729 18:40:24.560572  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | Checking permissions on dir: /home
	I0729 18:40:24.560588  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | Skipping /home - not owner
	I0729 18:40:24.560604  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Setting executable bit set on /home/jenkins/minikube-integration/19339-88081 (perms=drwxrwxr-x)
	I0729 18:40:24.560615  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 18:40:24.560628  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 18:40:24.560637  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Creating domain...
	I0729 18:40:24.561742  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) define libvirt domain using xml: 
	I0729 18:40:24.561762  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) <domain type='kvm'>
	I0729 18:40:24.561779  132010 main.go:141] libmachine: (kubernetes-upgrade-695907)   <name>kubernetes-upgrade-695907</name>
	I0729 18:40:24.561788  132010 main.go:141] libmachine: (kubernetes-upgrade-695907)   <memory unit='MiB'>2200</memory>
	I0729 18:40:24.561806  132010 main.go:141] libmachine: (kubernetes-upgrade-695907)   <vcpu>2</vcpu>
	I0729 18:40:24.561813  132010 main.go:141] libmachine: (kubernetes-upgrade-695907)   <features>
	I0729 18:40:24.561828  132010 main.go:141] libmachine: (kubernetes-upgrade-695907)     <acpi/>
	I0729 18:40:24.561835  132010 main.go:141] libmachine: (kubernetes-upgrade-695907)     <apic/>
	I0729 18:40:24.561842  132010 main.go:141] libmachine: (kubernetes-upgrade-695907)     <pae/>
	I0729 18:40:24.561849  132010 main.go:141] libmachine: (kubernetes-upgrade-695907)     
	I0729 18:40:24.561857  132010 main.go:141] libmachine: (kubernetes-upgrade-695907)   </features>
	I0729 18:40:24.561868  132010 main.go:141] libmachine: (kubernetes-upgrade-695907)   <cpu mode='host-passthrough'>
	I0729 18:40:24.561877  132010 main.go:141] libmachine: (kubernetes-upgrade-695907)   
	I0729 18:40:24.561886  132010 main.go:141] libmachine: (kubernetes-upgrade-695907)   </cpu>
	I0729 18:40:24.561897  132010 main.go:141] libmachine: (kubernetes-upgrade-695907)   <os>
	I0729 18:40:24.561908  132010 main.go:141] libmachine: (kubernetes-upgrade-695907)     <type>hvm</type>
	I0729 18:40:24.561917  132010 main.go:141] libmachine: (kubernetes-upgrade-695907)     <boot dev='cdrom'/>
	I0729 18:40:24.561927  132010 main.go:141] libmachine: (kubernetes-upgrade-695907)     <boot dev='hd'/>
	I0729 18:40:24.561937  132010 main.go:141] libmachine: (kubernetes-upgrade-695907)     <bootmenu enable='no'/>
	I0729 18:40:24.561945  132010 main.go:141] libmachine: (kubernetes-upgrade-695907)   </os>
	I0729 18:40:24.561955  132010 main.go:141] libmachine: (kubernetes-upgrade-695907)   <devices>
	I0729 18:40:24.561963  132010 main.go:141] libmachine: (kubernetes-upgrade-695907)     <disk type='file' device='cdrom'>
	I0729 18:40:24.561978  132010 main.go:141] libmachine: (kubernetes-upgrade-695907)       <source file='/home/jenkins/minikube-integration/19339-88081/.minikube/machines/kubernetes-upgrade-695907/boot2docker.iso'/>
	I0729 18:40:24.561986  132010 main.go:141] libmachine: (kubernetes-upgrade-695907)       <target dev='hdc' bus='scsi'/>
	I0729 18:40:24.561995  132010 main.go:141] libmachine: (kubernetes-upgrade-695907)       <readonly/>
	I0729 18:40:24.562005  132010 main.go:141] libmachine: (kubernetes-upgrade-695907)     </disk>
	I0729 18:40:24.562013  132010 main.go:141] libmachine: (kubernetes-upgrade-695907)     <disk type='file' device='disk'>
	I0729 18:40:24.562025  132010 main.go:141] libmachine: (kubernetes-upgrade-695907)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 18:40:24.562043  132010 main.go:141] libmachine: (kubernetes-upgrade-695907)       <source file='/home/jenkins/minikube-integration/19339-88081/.minikube/machines/kubernetes-upgrade-695907/kubernetes-upgrade-695907.rawdisk'/>
	I0729 18:40:24.562055  132010 main.go:141] libmachine: (kubernetes-upgrade-695907)       <target dev='hda' bus='virtio'/>
	I0729 18:40:24.562066  132010 main.go:141] libmachine: (kubernetes-upgrade-695907)     </disk>
	I0729 18:40:24.562086  132010 main.go:141] libmachine: (kubernetes-upgrade-695907)     <interface type='network'>
	I0729 18:40:24.562100  132010 main.go:141] libmachine: (kubernetes-upgrade-695907)       <source network='mk-kubernetes-upgrade-695907'/>
	I0729 18:40:24.562110  132010 main.go:141] libmachine: (kubernetes-upgrade-695907)       <model type='virtio'/>
	I0729 18:40:24.562122  132010 main.go:141] libmachine: (kubernetes-upgrade-695907)     </interface>
	I0729 18:40:24.562132  132010 main.go:141] libmachine: (kubernetes-upgrade-695907)     <interface type='network'>
	I0729 18:40:24.562145  132010 main.go:141] libmachine: (kubernetes-upgrade-695907)       <source network='default'/>
	I0729 18:40:24.562156  132010 main.go:141] libmachine: (kubernetes-upgrade-695907)       <model type='virtio'/>
	I0729 18:40:24.562171  132010 main.go:141] libmachine: (kubernetes-upgrade-695907)     </interface>
	I0729 18:40:24.562182  132010 main.go:141] libmachine: (kubernetes-upgrade-695907)     <serial type='pty'>
	I0729 18:40:24.562191  132010 main.go:141] libmachine: (kubernetes-upgrade-695907)       <target port='0'/>
	I0729 18:40:24.562200  132010 main.go:141] libmachine: (kubernetes-upgrade-695907)     </serial>
	I0729 18:40:24.562209  132010 main.go:141] libmachine: (kubernetes-upgrade-695907)     <console type='pty'>
	I0729 18:40:24.562219  132010 main.go:141] libmachine: (kubernetes-upgrade-695907)       <target type='serial' port='0'/>
	I0729 18:40:24.562231  132010 main.go:141] libmachine: (kubernetes-upgrade-695907)     </console>
	I0729 18:40:24.562241  132010 main.go:141] libmachine: (kubernetes-upgrade-695907)     <rng model='virtio'>
	I0729 18:40:24.562251  132010 main.go:141] libmachine: (kubernetes-upgrade-695907)       <backend model='random'>/dev/random</backend>
	I0729 18:40:24.562261  132010 main.go:141] libmachine: (kubernetes-upgrade-695907)     </rng>
	I0729 18:40:24.562271  132010 main.go:141] libmachine: (kubernetes-upgrade-695907)     
	I0729 18:40:24.562281  132010 main.go:141] libmachine: (kubernetes-upgrade-695907)     
	I0729 18:40:24.562289  132010 main.go:141] libmachine: (kubernetes-upgrade-695907)   </devices>
	I0729 18:40:24.562300  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) </domain>
	I0729 18:40:24.562311  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) 
	I0729 18:40:24.566539  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | domain kubernetes-upgrade-695907 has defined MAC address 52:54:00:8f:53:50 in network default
	I0729 18:40:24.567126  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Ensuring networks are active...
	I0729 18:40:24.567151  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | domain kubernetes-upgrade-695907 has defined MAC address 52:54:00:8f:91:a2 in network mk-kubernetes-upgrade-695907
	I0729 18:40:24.568006  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Ensuring network default is active
	I0729 18:40:24.568393  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Ensuring network mk-kubernetes-upgrade-695907 is active
	I0729 18:40:24.568968  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Getting domain xml...
	I0729 18:40:24.569802  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Creating domain...
	I0729 18:40:24.975068  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Waiting to get IP...
	I0729 18:40:24.975935  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | domain kubernetes-upgrade-695907 has defined MAC address 52:54:00:8f:91:a2 in network mk-kubernetes-upgrade-695907
	I0729 18:40:24.976398  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | unable to find current IP address of domain kubernetes-upgrade-695907 in network mk-kubernetes-upgrade-695907
	I0729 18:40:24.976425  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | I0729 18:40:24.976380  132754 retry.go:31] will retry after 188.926346ms: waiting for machine to come up
	I0729 18:40:25.167019  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | domain kubernetes-upgrade-695907 has defined MAC address 52:54:00:8f:91:a2 in network mk-kubernetes-upgrade-695907
	I0729 18:40:25.167526  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | unable to find current IP address of domain kubernetes-upgrade-695907 in network mk-kubernetes-upgrade-695907
	I0729 18:40:25.167558  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | I0729 18:40:25.167469  132754 retry.go:31] will retry after 262.759977ms: waiting for machine to come up
	I0729 18:40:25.432431  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | domain kubernetes-upgrade-695907 has defined MAC address 52:54:00:8f:91:a2 in network mk-kubernetes-upgrade-695907
	I0729 18:40:25.432871  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | unable to find current IP address of domain kubernetes-upgrade-695907 in network mk-kubernetes-upgrade-695907
	I0729 18:40:25.432911  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | I0729 18:40:25.432838  132754 retry.go:31] will retry after 416.65647ms: waiting for machine to come up
	I0729 18:40:25.851675  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | domain kubernetes-upgrade-695907 has defined MAC address 52:54:00:8f:91:a2 in network mk-kubernetes-upgrade-695907
	I0729 18:40:25.852131  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | unable to find current IP address of domain kubernetes-upgrade-695907 in network mk-kubernetes-upgrade-695907
	I0729 18:40:25.852160  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | I0729 18:40:25.852093  132754 retry.go:31] will retry after 585.571587ms: waiting for machine to come up
	I0729 18:40:26.439081  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | domain kubernetes-upgrade-695907 has defined MAC address 52:54:00:8f:91:a2 in network mk-kubernetes-upgrade-695907
	I0729 18:40:26.439666  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | unable to find current IP address of domain kubernetes-upgrade-695907 in network mk-kubernetes-upgrade-695907
	I0729 18:40:26.439695  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | I0729 18:40:26.439623  132754 retry.go:31] will retry after 477.743889ms: waiting for machine to come up
	I0729 18:40:26.919459  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | domain kubernetes-upgrade-695907 has defined MAC address 52:54:00:8f:91:a2 in network mk-kubernetes-upgrade-695907
	I0729 18:40:26.919997  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | unable to find current IP address of domain kubernetes-upgrade-695907 in network mk-kubernetes-upgrade-695907
	I0729 18:40:26.920031  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | I0729 18:40:26.919951  132754 retry.go:31] will retry after 810.904915ms: waiting for machine to come up
	I0729 18:40:27.732955  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | domain kubernetes-upgrade-695907 has defined MAC address 52:54:00:8f:91:a2 in network mk-kubernetes-upgrade-695907
	I0729 18:40:27.733414  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | unable to find current IP address of domain kubernetes-upgrade-695907 in network mk-kubernetes-upgrade-695907
	I0729 18:40:27.733438  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | I0729 18:40:27.733376  132754 retry.go:31] will retry after 937.993089ms: waiting for machine to come up
	I0729 18:40:28.672610  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | domain kubernetes-upgrade-695907 has defined MAC address 52:54:00:8f:91:a2 in network mk-kubernetes-upgrade-695907
	I0729 18:40:28.673112  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | unable to find current IP address of domain kubernetes-upgrade-695907 in network mk-kubernetes-upgrade-695907
	I0729 18:40:28.673143  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | I0729 18:40:28.673064  132754 retry.go:31] will retry after 1.103513466s: waiting for machine to come up
	I0729 18:40:29.777946  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | domain kubernetes-upgrade-695907 has defined MAC address 52:54:00:8f:91:a2 in network mk-kubernetes-upgrade-695907
	I0729 18:40:29.778390  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | unable to find current IP address of domain kubernetes-upgrade-695907 in network mk-kubernetes-upgrade-695907
	I0729 18:40:29.778428  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | I0729 18:40:29.778344  132754 retry.go:31] will retry after 1.786184358s: waiting for machine to come up
	I0729 18:40:31.566117  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | domain kubernetes-upgrade-695907 has defined MAC address 52:54:00:8f:91:a2 in network mk-kubernetes-upgrade-695907
	I0729 18:40:31.566715  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | unable to find current IP address of domain kubernetes-upgrade-695907 in network mk-kubernetes-upgrade-695907
	I0729 18:40:31.566767  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | I0729 18:40:31.566676  132754 retry.go:31] will retry after 1.548498439s: waiting for machine to come up
	I0729 18:40:33.117531  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | domain kubernetes-upgrade-695907 has defined MAC address 52:54:00:8f:91:a2 in network mk-kubernetes-upgrade-695907
	I0729 18:40:33.118029  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | unable to find current IP address of domain kubernetes-upgrade-695907 in network mk-kubernetes-upgrade-695907
	I0729 18:40:33.118060  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | I0729 18:40:33.117969  132754 retry.go:31] will retry after 2.798465775s: waiting for machine to come up
	I0729 18:40:35.918312  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | domain kubernetes-upgrade-695907 has defined MAC address 52:54:00:8f:91:a2 in network mk-kubernetes-upgrade-695907
	I0729 18:40:35.918837  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | unable to find current IP address of domain kubernetes-upgrade-695907 in network mk-kubernetes-upgrade-695907
	I0729 18:40:35.918869  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | I0729 18:40:35.918768  132754 retry.go:31] will retry after 3.54894959s: waiting for machine to come up
	I0729 18:40:39.468792  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | domain kubernetes-upgrade-695907 has defined MAC address 52:54:00:8f:91:a2 in network mk-kubernetes-upgrade-695907
	I0729 18:40:39.469211  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | unable to find current IP address of domain kubernetes-upgrade-695907 in network mk-kubernetes-upgrade-695907
	I0729 18:40:39.469235  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | I0729 18:40:39.469177  132754 retry.go:31] will retry after 3.426040458s: waiting for machine to come up
	I0729 18:40:42.897978  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | domain kubernetes-upgrade-695907 has defined MAC address 52:54:00:8f:91:a2 in network mk-kubernetes-upgrade-695907
	I0729 18:40:42.898499  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | unable to find current IP address of domain kubernetes-upgrade-695907 in network mk-kubernetes-upgrade-695907
	I0729 18:40:42.898523  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | I0729 18:40:42.898446  132754 retry.go:31] will retry after 4.719925291s: waiting for machine to come up
	I0729 18:40:47.621949  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | domain kubernetes-upgrade-695907 has defined MAC address 52:54:00:8f:91:a2 in network mk-kubernetes-upgrade-695907
	I0729 18:40:47.622418  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Found IP for machine: 192.168.72.224
	I0729 18:40:47.622446  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | domain kubernetes-upgrade-695907 has current primary IP address 192.168.72.224 and MAC address 52:54:00:8f:91:a2 in network mk-kubernetes-upgrade-695907
	I0729 18:40:47.622474  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Reserving static IP address...
	I0729 18:40:47.622816  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-695907", mac: "52:54:00:8f:91:a2", ip: "192.168.72.224"} in network mk-kubernetes-upgrade-695907
	I0729 18:40:47.696307  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | Getting to WaitForSSH function...
	I0729 18:40:47.696342  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Reserved static IP address: 192.168.72.224
	I0729 18:40:47.696356  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Waiting for SSH to be available...
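The "will retry after ...: waiting for machine to come up" lines above are a wait-with-backoff loop around the driver's DHCP-lease lookup. A minimal sketch of that pattern; lookupIP is a hypothetical stand-in for the real check:

    // A sketch of the wait-with-backoff loop behind the "will retry after ..." log lines.
    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    var errNoIP = errors.New("machine has no IP yet")

    // lookupIP is a placeholder; the real driver inspects the libvirt network's DHCP leases.
    func lookupIP() (string, error) {
    	return "", errNoIP
    }

    func waitForIP(deadline time.Duration) (string, error) {
    	start := time.Now()
    	delay := 200 * time.Millisecond
    	for time.Since(start) < deadline {
    		if ip, err := lookupIP(); err == nil {
    			return ip, nil
    		}
    		// Grow the delay and add jitter, roughly matching the increasing intervals logged.
    		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
    		time.Sleep(sleep)
    		delay *= 2
    	}
    	return "", fmt.Errorf("timed out after %v waiting for an IP", deadline)
    }

    func main() {
    	if _, err := waitForIP(3 * time.Second); err != nil {
    		fmt.Println(err)
    	}
    }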
	I0729 18:40:47.699418  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | domain kubernetes-upgrade-695907 has defined MAC address 52:54:00:8f:91:a2 in network mk-kubernetes-upgrade-695907
	I0729 18:40:47.699876  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:91:a2", ip: ""} in network mk-kubernetes-upgrade-695907: {Iface:virbr2 ExpiryTime:2024-07-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:8f:91:a2 Iaid: IPaddr:192.168.72.224 Prefix:24 Hostname:minikube Clientid:01:52:54:00:8f:91:a2}
	I0729 18:40:47.699910  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | domain kubernetes-upgrade-695907 has defined IP address 192.168.72.224 and MAC address 52:54:00:8f:91:a2 in network mk-kubernetes-upgrade-695907
	I0729 18:40:47.700047  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | Using SSH client type: external
	I0729 18:40:47.700076  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | Using SSH private key: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/kubernetes-upgrade-695907/id_rsa (-rw-------)
	I0729 18:40:47.700116  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.224 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19339-88081/.minikube/machines/kubernetes-upgrade-695907/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 18:40:47.700134  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | About to run SSH command:
	I0729 18:40:47.700150  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | exit 0
	I0729 18:40:47.825252  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | SSH cmd err, output: <nil>: 
	I0729 18:40:47.825562  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) KVM machine creation complete!
	I0729 18:40:47.826043  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetConfigRaw
	I0729 18:40:47.826660  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .DriverName
	I0729 18:40:47.826875  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .DriverName
	I0729 18:40:47.827064  132010 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 18:40:47.827081  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetState
	I0729 18:40:47.828347  132010 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 18:40:47.828360  132010 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 18:40:47.828365  132010 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 18:40:47.828371  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetSSHHostname
	I0729 18:40:47.830700  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | domain kubernetes-upgrade-695907 has defined MAC address 52:54:00:8f:91:a2 in network mk-kubernetes-upgrade-695907
	I0729 18:40:47.831050  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:91:a2", ip: ""} in network mk-kubernetes-upgrade-695907: {Iface:virbr2 ExpiryTime:2024-07-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:8f:91:a2 Iaid: IPaddr:192.168.72.224 Prefix:24 Hostname:kubernetes-upgrade-695907 Clientid:01:52:54:00:8f:91:a2}
	I0729 18:40:47.831078  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | domain kubernetes-upgrade-695907 has defined IP address 192.168.72.224 and MAC address 52:54:00:8f:91:a2 in network mk-kubernetes-upgrade-695907
	I0729 18:40:47.831202  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetSSHPort
	I0729 18:40:47.831386  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetSSHKeyPath
	I0729 18:40:47.831542  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetSSHKeyPath
	I0729 18:40:47.831704  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetSSHUsername
	I0729 18:40:47.831906  132010 main.go:141] libmachine: Using SSH client type: native
	I0729 18:40:47.832111  132010 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.224 22 <nil> <nil>}
	I0729 18:40:47.832124  132010 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 18:40:47.936189  132010 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 18:40:47.936214  132010 main.go:141] libmachine: Detecting the provisioner...
	I0729 18:40:47.936224  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetSSHHostname
	I0729 18:40:47.939273  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | domain kubernetes-upgrade-695907 has defined MAC address 52:54:00:8f:91:a2 in network mk-kubernetes-upgrade-695907
	I0729 18:40:47.939679  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:91:a2", ip: ""} in network mk-kubernetes-upgrade-695907: {Iface:virbr2 ExpiryTime:2024-07-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:8f:91:a2 Iaid: IPaddr:192.168.72.224 Prefix:24 Hostname:kubernetes-upgrade-695907 Clientid:01:52:54:00:8f:91:a2}
	I0729 18:40:47.939709  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | domain kubernetes-upgrade-695907 has defined IP address 192.168.72.224 and MAC address 52:54:00:8f:91:a2 in network mk-kubernetes-upgrade-695907
	I0729 18:40:47.939828  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetSSHPort
	I0729 18:40:47.940048  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetSSHKeyPath
	I0729 18:40:47.940241  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetSSHKeyPath
	I0729 18:40:47.940416  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetSSHUsername
	I0729 18:40:47.940645  132010 main.go:141] libmachine: Using SSH client type: native
	I0729 18:40:47.940834  132010 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.224 22 <nil> <nil>}
	I0729 18:40:47.940845  132010 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 18:40:48.045630  132010 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 18:40:48.045711  132010 main.go:141] libmachine: found compatible host: buildroot
	I0729 18:40:48.045724  132010 main.go:141] libmachine: Provisioning with buildroot...
	I0729 18:40:48.045735  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetMachineName
	I0729 18:40:48.046016  132010 buildroot.go:166] provisioning hostname "kubernetes-upgrade-695907"
	I0729 18:40:48.046039  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetMachineName
	I0729 18:40:48.046261  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetSSHHostname
	I0729 18:40:48.049190  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | domain kubernetes-upgrade-695907 has defined MAC address 52:54:00:8f:91:a2 in network mk-kubernetes-upgrade-695907
	I0729 18:40:48.049592  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:91:a2", ip: ""} in network mk-kubernetes-upgrade-695907: {Iface:virbr2 ExpiryTime:2024-07-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:8f:91:a2 Iaid: IPaddr:192.168.72.224 Prefix:24 Hostname:kubernetes-upgrade-695907 Clientid:01:52:54:00:8f:91:a2}
	I0729 18:40:48.049615  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | domain kubernetes-upgrade-695907 has defined IP address 192.168.72.224 and MAC address 52:54:00:8f:91:a2 in network mk-kubernetes-upgrade-695907
	I0729 18:40:48.049714  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetSSHPort
	I0729 18:40:48.049908  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetSSHKeyPath
	I0729 18:40:48.050098  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetSSHKeyPath
	I0729 18:40:48.050259  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetSSHUsername
	I0729 18:40:48.050447  132010 main.go:141] libmachine: Using SSH client type: native
	I0729 18:40:48.050683  132010 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.224 22 <nil> <nil>}
	I0729 18:40:48.050703  132010 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-695907 && echo "kubernetes-upgrade-695907" | sudo tee /etc/hostname
	I0729 18:40:48.171847  132010 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-695907
	
	I0729 18:40:48.171901  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetSSHHostname
	I0729 18:40:48.175004  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | domain kubernetes-upgrade-695907 has defined MAC address 52:54:00:8f:91:a2 in network mk-kubernetes-upgrade-695907
	I0729 18:40:48.175341  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:91:a2", ip: ""} in network mk-kubernetes-upgrade-695907: {Iface:virbr2 ExpiryTime:2024-07-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:8f:91:a2 Iaid: IPaddr:192.168.72.224 Prefix:24 Hostname:kubernetes-upgrade-695907 Clientid:01:52:54:00:8f:91:a2}
	I0729 18:40:48.175379  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | domain kubernetes-upgrade-695907 has defined IP address 192.168.72.224 and MAC address 52:54:00:8f:91:a2 in network mk-kubernetes-upgrade-695907
	I0729 18:40:48.175556  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetSSHPort
	I0729 18:40:48.175768  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetSSHKeyPath
	I0729 18:40:48.175939  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetSSHKeyPath
	I0729 18:40:48.176081  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetSSHUsername
	I0729 18:40:48.176295  132010 main.go:141] libmachine: Using SSH client type: native
	I0729 18:40:48.176461  132010 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.224 22 <nil> <nil>}
	I0729 18:40:48.176480  132010 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-695907' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-695907/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-695907' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 18:40:48.290516  132010 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 18:40:48.290550  132010 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19339-88081/.minikube CaCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19339-88081/.minikube}
	I0729 18:40:48.290627  132010 buildroot.go:174] setting up certificates
	I0729 18:40:48.290640  132010 provision.go:84] configureAuth start
	I0729 18:40:48.290658  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetMachineName
	I0729 18:40:48.290973  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetIP
	I0729 18:40:48.293860  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | domain kubernetes-upgrade-695907 has defined MAC address 52:54:00:8f:91:a2 in network mk-kubernetes-upgrade-695907
	I0729 18:40:48.294328  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:91:a2", ip: ""} in network mk-kubernetes-upgrade-695907: {Iface:virbr2 ExpiryTime:2024-07-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:8f:91:a2 Iaid: IPaddr:192.168.72.224 Prefix:24 Hostname:kubernetes-upgrade-695907 Clientid:01:52:54:00:8f:91:a2}
	I0729 18:40:48.294356  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | domain kubernetes-upgrade-695907 has defined IP address 192.168.72.224 and MAC address 52:54:00:8f:91:a2 in network mk-kubernetes-upgrade-695907
	I0729 18:40:48.294561  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetSSHHostname
	I0729 18:40:48.296800  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | domain kubernetes-upgrade-695907 has defined MAC address 52:54:00:8f:91:a2 in network mk-kubernetes-upgrade-695907
	I0729 18:40:48.297189  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:91:a2", ip: ""} in network mk-kubernetes-upgrade-695907: {Iface:virbr2 ExpiryTime:2024-07-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:8f:91:a2 Iaid: IPaddr:192.168.72.224 Prefix:24 Hostname:kubernetes-upgrade-695907 Clientid:01:52:54:00:8f:91:a2}
	I0729 18:40:48.297217  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | domain kubernetes-upgrade-695907 has defined IP address 192.168.72.224 and MAC address 52:54:00:8f:91:a2 in network mk-kubernetes-upgrade-695907
	I0729 18:40:48.297346  132010 provision.go:143] copyHostCerts
	I0729 18:40:48.297422  132010 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem, removing ...
	I0729 18:40:48.297436  132010 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem
	I0729 18:40:48.297508  132010 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem (1078 bytes)
	I0729 18:40:48.297648  132010 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem, removing ...
	I0729 18:40:48.297659  132010 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem
	I0729 18:40:48.297683  132010 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem (1123 bytes)
	I0729 18:40:48.297753  132010 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem, removing ...
	I0729 18:40:48.297766  132010 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem
	I0729 18:40:48.297806  132010 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem (1679 bytes)
	I0729 18:40:48.297884  132010 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-695907 san=[127.0.0.1 192.168.72.224 kubernetes-upgrade-695907 localhost minikube]
	I0729 18:40:48.520902  132010 provision.go:177] copyRemoteCerts
	I0729 18:40:48.520970  132010 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 18:40:48.520998  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetSSHHostname
	I0729 18:40:48.523645  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | domain kubernetes-upgrade-695907 has defined MAC address 52:54:00:8f:91:a2 in network mk-kubernetes-upgrade-695907
	I0729 18:40:48.523939  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:91:a2", ip: ""} in network mk-kubernetes-upgrade-695907: {Iface:virbr2 ExpiryTime:2024-07-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:8f:91:a2 Iaid: IPaddr:192.168.72.224 Prefix:24 Hostname:kubernetes-upgrade-695907 Clientid:01:52:54:00:8f:91:a2}
	I0729 18:40:48.523968  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | domain kubernetes-upgrade-695907 has defined IP address 192.168.72.224 and MAC address 52:54:00:8f:91:a2 in network mk-kubernetes-upgrade-695907
	I0729 18:40:48.524091  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetSSHPort
	I0729 18:40:48.524300  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetSSHKeyPath
	I0729 18:40:48.524472  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetSSHUsername
	I0729 18:40:48.524622  132010 sshutil.go:53] new ssh client: &{IP:192.168.72.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/kubernetes-upgrade-695907/id_rsa Username:docker}
	I0729 18:40:48.611079  132010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 18:40:48.642020  132010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0729 18:40:48.667544  132010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 18:40:48.692010  132010 provision.go:87] duration metric: took 401.352828ms to configureAuth
	I0729 18:40:48.692059  132010 buildroot.go:189] setting minikube options for container-runtime
	I0729 18:40:48.692231  132010 config.go:182] Loaded profile config "kubernetes-upgrade-695907": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 18:40:48.692319  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetSSHHostname
	I0729 18:40:48.695078  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | domain kubernetes-upgrade-695907 has defined MAC address 52:54:00:8f:91:a2 in network mk-kubernetes-upgrade-695907
	I0729 18:40:48.695473  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:91:a2", ip: ""} in network mk-kubernetes-upgrade-695907: {Iface:virbr2 ExpiryTime:2024-07-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:8f:91:a2 Iaid: IPaddr:192.168.72.224 Prefix:24 Hostname:kubernetes-upgrade-695907 Clientid:01:52:54:00:8f:91:a2}
	I0729 18:40:48.695513  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | domain kubernetes-upgrade-695907 has defined IP address 192.168.72.224 and MAC address 52:54:00:8f:91:a2 in network mk-kubernetes-upgrade-695907
	I0729 18:40:48.695715  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetSSHPort
	I0729 18:40:48.695934  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetSSHKeyPath
	I0729 18:40:48.696112  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetSSHKeyPath
	I0729 18:40:48.696242  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetSSHUsername
	I0729 18:40:48.696426  132010 main.go:141] libmachine: Using SSH client type: native
	I0729 18:40:48.696674  132010 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.224 22 <nil> <nil>}
	I0729 18:40:48.696695  132010 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 18:40:48.958782  132010 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
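The step above writes a one-line environment file so CRI-O treats the service CIDR (10.96.0.0/12) as an insecure registry range, then restarts the daemon; the echoed line is the file's content. A quick sanity check against the same VM (sketch; paths taken from the command above, not part of the recorded run):

    # The drop-in written by the provisioner
    minikube -p kubernetes-upgrade-695907 ssh -- sudo cat /etc/sysconfig/crio.minikube
    # CRI-O should have come back up after the restart
    minikube -p kubernetes-upgrade-695907 ssh -- sudo systemctl is-active crio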
	
	I0729 18:40:48.958838  132010 main.go:141] libmachine: Checking connection to Docker...
	I0729 18:40:48.958852  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetURL
	I0729 18:40:48.960166  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | Using libvirt version 6000000
	I0729 18:40:48.962746  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | domain kubernetes-upgrade-695907 has defined MAC address 52:54:00:8f:91:a2 in network mk-kubernetes-upgrade-695907
	I0729 18:40:48.963058  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:91:a2", ip: ""} in network mk-kubernetes-upgrade-695907: {Iface:virbr2 ExpiryTime:2024-07-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:8f:91:a2 Iaid: IPaddr:192.168.72.224 Prefix:24 Hostname:kubernetes-upgrade-695907 Clientid:01:52:54:00:8f:91:a2}
	I0729 18:40:48.963101  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | domain kubernetes-upgrade-695907 has defined IP address 192.168.72.224 and MAC address 52:54:00:8f:91:a2 in network mk-kubernetes-upgrade-695907
	I0729 18:40:48.963237  132010 main.go:141] libmachine: Docker is up and running!
	I0729 18:40:48.963255  132010 main.go:141] libmachine: Reticulating splines...
	I0729 18:40:48.963264  132010 client.go:171] duration metric: took 24.866851453s to LocalClient.Create
	I0729 18:40:48.963285  132010 start.go:167] duration metric: took 24.866914856s to libmachine.API.Create "kubernetes-upgrade-695907"
	I0729 18:40:48.963296  132010 start.go:293] postStartSetup for "kubernetes-upgrade-695907" (driver="kvm2")
	I0729 18:40:48.963308  132010 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 18:40:48.963332  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .DriverName
	I0729 18:40:48.963572  132010 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 18:40:48.963603  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetSSHHostname
	I0729 18:40:48.965957  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | domain kubernetes-upgrade-695907 has defined MAC address 52:54:00:8f:91:a2 in network mk-kubernetes-upgrade-695907
	I0729 18:40:48.966302  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:91:a2", ip: ""} in network mk-kubernetes-upgrade-695907: {Iface:virbr2 ExpiryTime:2024-07-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:8f:91:a2 Iaid: IPaddr:192.168.72.224 Prefix:24 Hostname:kubernetes-upgrade-695907 Clientid:01:52:54:00:8f:91:a2}
	I0729 18:40:48.966369  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | domain kubernetes-upgrade-695907 has defined IP address 192.168.72.224 and MAC address 52:54:00:8f:91:a2 in network mk-kubernetes-upgrade-695907
	I0729 18:40:48.966469  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetSSHPort
	I0729 18:40:48.966665  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetSSHKeyPath
	I0729 18:40:48.966846  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetSSHUsername
	I0729 18:40:48.967019  132010 sshutil.go:53] new ssh client: &{IP:192.168.72.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/kubernetes-upgrade-695907/id_rsa Username:docker}
	I0729 18:40:49.048781  132010 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 18:40:49.053070  132010 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 18:40:49.053095  132010 filesync.go:126] Scanning /home/jenkins/minikube-integration/19339-88081/.minikube/addons for local assets ...
	I0729 18:40:49.053155  132010 filesync.go:126] Scanning /home/jenkins/minikube-integration/19339-88081/.minikube/files for local assets ...
	I0729 18:40:49.053244  132010 filesync.go:149] local asset: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem -> 952822.pem in /etc/ssl/certs
	I0729 18:40:49.053334  132010 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 18:40:49.064625  132010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem --> /etc/ssl/certs/952822.pem (1708 bytes)
	I0729 18:40:49.089040  132010 start.go:296] duration metric: took 125.730457ms for postStartSetup
	I0729 18:40:49.089085  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetConfigRaw
	I0729 18:40:49.089644  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetIP
	I0729 18:40:49.092331  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | domain kubernetes-upgrade-695907 has defined MAC address 52:54:00:8f:91:a2 in network mk-kubernetes-upgrade-695907
	I0729 18:40:49.092670  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:91:a2", ip: ""} in network mk-kubernetes-upgrade-695907: {Iface:virbr2 ExpiryTime:2024-07-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:8f:91:a2 Iaid: IPaddr:192.168.72.224 Prefix:24 Hostname:kubernetes-upgrade-695907 Clientid:01:52:54:00:8f:91:a2}
	I0729 18:40:49.092703  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | domain kubernetes-upgrade-695907 has defined IP address 192.168.72.224 and MAC address 52:54:00:8f:91:a2 in network mk-kubernetes-upgrade-695907
	I0729 18:40:49.092948  132010 profile.go:143] Saving config to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/kubernetes-upgrade-695907/config.json ...
	I0729 18:40:49.093183  132010 start.go:128] duration metric: took 25.019429626s to createHost
	I0729 18:40:49.093212  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetSSHHostname
	I0729 18:40:49.095496  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | domain kubernetes-upgrade-695907 has defined MAC address 52:54:00:8f:91:a2 in network mk-kubernetes-upgrade-695907
	I0729 18:40:49.095868  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:91:a2", ip: ""} in network mk-kubernetes-upgrade-695907: {Iface:virbr2 ExpiryTime:2024-07-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:8f:91:a2 Iaid: IPaddr:192.168.72.224 Prefix:24 Hostname:kubernetes-upgrade-695907 Clientid:01:52:54:00:8f:91:a2}
	I0729 18:40:49.095896  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | domain kubernetes-upgrade-695907 has defined IP address 192.168.72.224 and MAC address 52:54:00:8f:91:a2 in network mk-kubernetes-upgrade-695907
	I0729 18:40:49.096029  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetSSHPort
	I0729 18:40:49.096257  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetSSHKeyPath
	I0729 18:40:49.096456  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetSSHKeyPath
	I0729 18:40:49.096625  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetSSHUsername
	I0729 18:40:49.096837  132010 main.go:141] libmachine: Using SSH client type: native
	I0729 18:40:49.097083  132010 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.224 22 <nil> <nil>}
	I0729 18:40:49.097099  132010 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 18:40:49.201733  132010 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722278449.164372191
	
	I0729 18:40:49.201761  132010 fix.go:216] guest clock: 1722278449.164372191
	I0729 18:40:49.201768  132010 fix.go:229] Guest: 2024-07-29 18:40:49.164372191 +0000 UTC Remote: 2024-07-29 18:40:49.093198042 +0000 UTC m=+90.028440949 (delta=71.174149ms)
	I0729 18:40:49.201812  132010 fix.go:200] guest clock delta is within tolerance: 71.174149ms
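The guest-clock check reads date +%s.%N inside the VM and compares it with the host clock at the moment the SSH command returns; the run proceeds because the ~71ms delta is within tolerance. The same comparison can be reproduced by hand (illustrative sketch; assumes GNU date on the host and a running profile):

    # Capture both clocks as close together as possible and print the difference
    host_ts=$(date +%s.%N)
    guest_ts=$(minikube -p kubernetes-upgrade-695907 ssh -- date +%s.%N | tr -d '\r')
    awk -v h="$host_ts" -v g="$guest_ts" 'BEGIN { printf "guest - host delta: %.3fs\n", g - h }'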
	I0729 18:40:49.201820  132010 start.go:83] releasing machines lock for "kubernetes-upgrade-695907", held for 25.12822421s
	I0729 18:40:49.201858  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .DriverName
	I0729 18:40:49.202115  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetIP
	I0729 18:40:49.204992  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | domain kubernetes-upgrade-695907 has defined MAC address 52:54:00:8f:91:a2 in network mk-kubernetes-upgrade-695907
	I0729 18:40:49.205431  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:91:a2", ip: ""} in network mk-kubernetes-upgrade-695907: {Iface:virbr2 ExpiryTime:2024-07-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:8f:91:a2 Iaid: IPaddr:192.168.72.224 Prefix:24 Hostname:kubernetes-upgrade-695907 Clientid:01:52:54:00:8f:91:a2}
	I0729 18:40:49.205464  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | domain kubernetes-upgrade-695907 has defined IP address 192.168.72.224 and MAC address 52:54:00:8f:91:a2 in network mk-kubernetes-upgrade-695907
	I0729 18:40:49.205624  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .DriverName
	I0729 18:40:49.206352  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .DriverName
	I0729 18:40:49.206563  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .DriverName
	I0729 18:40:49.206663  132010 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 18:40:49.206714  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetSSHHostname
	I0729 18:40:49.206818  132010 ssh_runner.go:195] Run: cat /version.json
	I0729 18:40:49.206843  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetSSHHostname
	I0729 18:40:49.209447  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | domain kubernetes-upgrade-695907 has defined MAC address 52:54:00:8f:91:a2 in network mk-kubernetes-upgrade-695907
	I0729 18:40:49.209800  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:91:a2", ip: ""} in network mk-kubernetes-upgrade-695907: {Iface:virbr2 ExpiryTime:2024-07-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:8f:91:a2 Iaid: IPaddr:192.168.72.224 Prefix:24 Hostname:kubernetes-upgrade-695907 Clientid:01:52:54:00:8f:91:a2}
	I0729 18:40:49.209825  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | domain kubernetes-upgrade-695907 has defined MAC address 52:54:00:8f:91:a2 in network mk-kubernetes-upgrade-695907
	I0729 18:40:49.209871  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | domain kubernetes-upgrade-695907 has defined IP address 192.168.72.224 and MAC address 52:54:00:8f:91:a2 in network mk-kubernetes-upgrade-695907
	I0729 18:40:49.210052  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetSSHPort
	I0729 18:40:49.210287  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:91:a2", ip: ""} in network mk-kubernetes-upgrade-695907: {Iface:virbr2 ExpiryTime:2024-07-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:8f:91:a2 Iaid: IPaddr:192.168.72.224 Prefix:24 Hostname:kubernetes-upgrade-695907 Clientid:01:52:54:00:8f:91:a2}
	I0729 18:40:49.210314  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetSSHKeyPath
	I0729 18:40:49.210333  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | domain kubernetes-upgrade-695907 has defined IP address 192.168.72.224 and MAC address 52:54:00:8f:91:a2 in network mk-kubernetes-upgrade-695907
	I0729 18:40:49.210481  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetSSHPort
	I0729 18:40:49.210547  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetSSHUsername
	I0729 18:40:49.210626  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetSSHKeyPath
	I0729 18:40:49.210733  132010 sshutil.go:53] new ssh client: &{IP:192.168.72.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/kubernetes-upgrade-695907/id_rsa Username:docker}
	I0729 18:40:49.210794  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetSSHUsername
	I0729 18:40:49.210940  132010 sshutil.go:53] new ssh client: &{IP:192.168.72.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/kubernetes-upgrade-695907/id_rsa Username:docker}
	I0729 18:40:49.321447  132010 ssh_runner.go:195] Run: systemctl --version
	I0729 18:40:49.328530  132010 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 18:40:49.488279  132010 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 18:40:49.496819  132010 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 18:40:49.496906  132010 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 18:40:49.520463  132010 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 18:40:49.520486  132010 start.go:495] detecting cgroup driver to use...
	I0729 18:40:49.520567  132010 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 18:40:49.541048  132010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 18:40:49.555632  132010 docker.go:217] disabling cri-docker service (if available) ...
	I0729 18:40:49.555695  132010 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 18:40:49.569984  132010 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 18:40:49.583985  132010 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 18:40:49.723388  132010 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 18:40:49.898084  132010 docker.go:233] disabling docker service ...
	I0729 18:40:49.898149  132010 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 18:40:49.914462  132010 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 18:40:49.928604  132010 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 18:40:50.068882  132010 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 18:40:50.201628  132010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 18:40:50.218000  132010 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 18:40:50.237582  132010 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0729 18:40:50.237654  132010 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:40:50.248607  132010 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 18:40:50.248674  132010 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:40:50.260776  132010 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:40:50.272952  132010 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
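The sed edits above pin the pause image to registry.k8s.io/pause:3.2, switch CRI-O's cgroup manager to cgroupfs, and re-add conmon_cgroup = "pod" after it. Grepping the drop-in afterwards confirms all three landed (sketch, same file path as in the commands above):

    # Expect pause_image, cgroup_manager and conmon_cgroup with the values set above
    minikube -p kubernetes-upgrade-695907 ssh -- \
      "sudo grep -E '^(pause_image|cgroup_manager|conmon_cgroup)' /etc/crio/crio.conf.d/02-crio.conf"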
	I0729 18:40:50.283835  132010 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 18:40:50.294981  132010 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 18:40:50.304255  132010 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 18:40:50.304319  132010 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 18:40:50.317950  132010 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
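Because the bridge-netfilter sysctl is missing at first (the status 255 above), minikube loads br_netfilter and enables IPv4 forwarding before restarting CRI-O. Both kernel settings can be verified directly (sketch; not part of the recorded run):

    # The module should be loaded and both knobs should read 1
    minikube -p kubernetes-upgrade-695907 ssh -- "lsmod | grep br_netfilter"
    minikube -p kubernetes-upgrade-695907 ssh -- sudo sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward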
	I0729 18:40:50.327983  132010 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:40:50.446753  132010 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 18:40:50.598986  132010 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 18:40:50.599053  132010 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 18:40:50.604530  132010 start.go:563] Will wait 60s for crictl version
	I0729 18:40:50.604600  132010 ssh_runner.go:195] Run: which crictl
	I0729 18:40:50.608830  132010 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 18:40:50.650260  132010 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 18:40:50.650340  132010 ssh_runner.go:195] Run: crio --version
	I0729 18:40:50.684190  132010 ssh_runner.go:195] Run: crio --version
	I0729 18:40:50.718985  132010 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0729 18:40:50.720185  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetIP
	I0729 18:40:50.723510  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | domain kubernetes-upgrade-695907 has defined MAC address 52:54:00:8f:91:a2 in network mk-kubernetes-upgrade-695907
	I0729 18:40:50.723979  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:91:a2", ip: ""} in network mk-kubernetes-upgrade-695907: {Iface:virbr2 ExpiryTime:2024-07-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:8f:91:a2 Iaid: IPaddr:192.168.72.224 Prefix:24 Hostname:kubernetes-upgrade-695907 Clientid:01:52:54:00:8f:91:a2}
	I0729 18:40:50.724013  132010 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | domain kubernetes-upgrade-695907 has defined IP address 192.168.72.224 and MAC address 52:54:00:8f:91:a2 in network mk-kubernetes-upgrade-695907
	I0729 18:40:50.724241  132010 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0729 18:40:50.729064  132010 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:40:50.745238  132010 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-695907 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-695907 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.224 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I0729 18:40:50.745354  132010 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 18:40:50.745422  132010 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:40:50.793079  132010 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 18:40:50.793155  132010 ssh_runner.go:195] Run: which lz4
	I0729 18:40:50.797973  132010 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 18:40:50.805470  132010 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 18:40:50.805507  132010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0729 18:40:52.642188  132010 crio.go:462] duration metric: took 1.844249892s to copy over tarball
	I0729 18:40:52.642284  132010 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 18:40:55.309081  132010 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.666756242s)
	I0729 18:40:55.309114  132010 crio.go:469] duration metric: took 2.666896029s to extract the tarball
	I0729 18:40:55.309124  132010 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 18:40:55.353642  132010 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:40:55.408576  132010 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
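Even after the preload tarball has been extracted into /var, the runtime still reports no registry.k8s.io/kube-apiserver:v1.20.0, so minikube falls back to its on-disk image cache (which the warning further down shows is also empty). A human-readable variant of the image listing used above makes it easy to see which v1.20.0 images are actually present (sketch):

    # Same data as "crictl images --output json", but tabular
    minikube -p kubernetes-upgrade-695907 ssh -- sudo crictl images
    # Only the control-plane images kubeadm will need
    minikube -p kubernetes-upgrade-695907 ssh -- "sudo crictl images | grep -E 'kube-|etcd|coredns|pause'"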
	I0729 18:40:55.408605  132010 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 18:40:55.408672  132010 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:40:55.408695  132010 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0729 18:40:55.408701  132010 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 18:40:55.408752  132010 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 18:40:55.408794  132010 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0729 18:40:55.408962  132010 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0729 18:40:55.409004  132010 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 18:40:55.408981  132010 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 18:40:55.410704  132010 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 18:40:55.410800  132010 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0729 18:40:55.410852  132010 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0729 18:40:55.410910  132010 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 18:40:55.410929  132010 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 18:40:55.410711  132010 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 18:40:55.410862  132010 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:40:55.411107  132010 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0729 18:40:55.723170  132010 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:40:56.308887  132010 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0729 18:40:56.311064  132010 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0729 18:40:56.328761  132010 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0729 18:40:56.330695  132010 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0729 18:40:56.335544  132010 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 18:40:56.359667  132010 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0729 18:40:56.367983  132010 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0729 18:40:56.410825  132010 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0729 18:40:56.410856  132010 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0729 18:40:56.410881  132010 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0729 18:40:56.410891  132010 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 18:40:56.410936  132010 ssh_runner.go:195] Run: which crictl
	I0729 18:40:56.410937  132010 ssh_runner.go:195] Run: which crictl
	I0729 18:40:56.496769  132010 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0729 18:40:56.496827  132010 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 18:40:56.496873  132010 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0729 18:40:56.496891  132010 ssh_runner.go:195] Run: which crictl
	I0729 18:40:56.496912  132010 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0729 18:40:56.496924  132010 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0729 18:40:56.496949  132010 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 18:40:56.496969  132010 ssh_runner.go:195] Run: which crictl
	I0729 18:40:56.496999  132010 ssh_runner.go:195] Run: which crictl
	I0729 18:40:56.499386  132010 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0729 18:40:56.499421  132010 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 18:40:56.499458  132010 ssh_runner.go:195] Run: which crictl
	I0729 18:40:56.513951  132010 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0729 18:40:56.513999  132010 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0729 18:40:56.514003  132010 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 18:40:56.514015  132010 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 18:40:56.514037  132010 ssh_runner.go:195] Run: which crictl
	I0729 18:40:56.514062  132010 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 18:40:56.514099  132010 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 18:40:56.514120  132010 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 18:40:56.514150  132010 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 18:40:56.638203  132010 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 18:40:56.685108  132010 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0729 18:40:56.685131  132010 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0729 18:40:56.685163  132010 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0729 18:40:56.702572  132010 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0729 18:40:56.702657  132010 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0729 18:40:56.702665  132010 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0729 18:40:56.702737  132010 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0729 18:40:56.702811  132010 cache_images.go:92] duration metric: took 1.294189929s to LoadCachedImages
	W0729 18:40:56.702907  132010 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19339-88081/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19339-88081/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0729 18:40:56.702924  132010 kubeadm.go:934] updating node { 192.168.72.224 8443 v1.20.0 crio true true} ...
	I0729 18:40:56.703051  132010 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-695907 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.224
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-695907 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
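The generated unit drop-in above points the kubelet at the CRI-O socket, overrides the hostname, and pins the node IP; it is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down. Once systemd has been reloaded, the effective flags can be read back from that file (sketch; path taken from the scp step below):

    # The ExecStart line systemd will actually use for the kubelet
    minikube -p kubernetes-upgrade-695907 ssh -- \
      "grep ExecStart /etc/systemd/system/kubelet.service.d/10-kubeadm.conf"
    # Whether the service is up yet (it is started a few steps later in the log)
    minikube -p kubernetes-upgrade-695907 ssh -- sudo systemctl status kubelet --no-pager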
	I0729 18:40:56.703169  132010 ssh_runner.go:195] Run: crio config
	I0729 18:40:56.758424  132010 cni.go:84] Creating CNI manager for ""
	I0729 18:40:56.758500  132010 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:40:56.758517  132010 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 18:40:56.758547  132010 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.224 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-695907 NodeName:kubernetes-upgrade-695907 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.224"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.224 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0729 18:40:56.758746  132010 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.224
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-695907"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.224
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.224"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 18:40:56.758824  132010 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0729 18:40:56.769684  132010 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 18:40:56.769762  132010 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 18:40:56.779837  132010 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0729 18:40:56.801070  132010 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 18:40:56.821606  132010 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
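At this point the kubelet unit, its kubeadm drop-in, and the rendered kubeadm config have all been staged on the node; the YAML above becomes /var/tmp/minikube/kubeadm.yaml.new and is promoted to kubeadm.yaml just before kubeadm runs (see the cp near the end of this log). Reading the staged file back is a cheap check that the advertise address, pod subnet, and version match the plan (sketch):

    # Key fields of the staged kubeadm config
    minikube -p kubernetes-upgrade-695907 ssh -- \
      "sudo grep -E 'advertiseAddress|podSubnet|kubernetesVersion|criSocket' /var/tmp/minikube/kubeadm.yaml.new"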
	I0729 18:40:56.843562  132010 ssh_runner.go:195] Run: grep 192.168.72.224	control-plane.minikube.internal$ /etc/hosts
	I0729 18:40:56.847876  132010 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.224	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:40:56.861512  132010 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:40:56.986346  132010 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:40:57.004266  132010 certs.go:68] Setting up /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/kubernetes-upgrade-695907 for IP: 192.168.72.224
	I0729 18:40:57.004291  132010 certs.go:194] generating shared ca certs ...
	I0729 18:40:57.004311  132010 certs.go:226] acquiring lock for ca certs: {Name:mk20106d58b3f22ea0fc0f4b499fa3fa572a2690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:40:57.004479  132010 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key
	I0729 18:40:57.004544  132010 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key
	I0729 18:40:57.004559  132010 certs.go:256] generating profile certs ...
	I0729 18:40:57.004630  132010 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/kubernetes-upgrade-695907/client.key
	I0729 18:40:57.004648  132010 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/kubernetes-upgrade-695907/client.crt with IP's: []
	I0729 18:40:57.315771  132010 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/kubernetes-upgrade-695907/client.crt ...
	I0729 18:40:57.315817  132010 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/kubernetes-upgrade-695907/client.crt: {Name:mka35ef1ce4360bcb721188c56c2cfbad30c31aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:40:57.316006  132010 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/kubernetes-upgrade-695907/client.key ...
	I0729 18:40:57.316027  132010 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/kubernetes-upgrade-695907/client.key: {Name:mk18c655b9d8b8b557efc7ab4b341fa1cd033e13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:40:57.316130  132010 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/kubernetes-upgrade-695907/apiserver.key.e8d0a759
	I0729 18:40:57.316151  132010 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/kubernetes-upgrade-695907/apiserver.crt.e8d0a759 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.224]
	I0729 18:40:57.658280  132010 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/kubernetes-upgrade-695907/apiserver.crt.e8d0a759 ...
	I0729 18:40:57.658330  132010 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/kubernetes-upgrade-695907/apiserver.crt.e8d0a759: {Name:mk23702b6671e4441aafffa36d98cf2678ca0aec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:40:57.706482  132010 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/kubernetes-upgrade-695907/apiserver.key.e8d0a759 ...
	I0729 18:40:57.706521  132010 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/kubernetes-upgrade-695907/apiserver.key.e8d0a759: {Name:mkf8487494e390dc482215916fea6a3263356b3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:40:57.706687  132010 certs.go:381] copying /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/kubernetes-upgrade-695907/apiserver.crt.e8d0a759 -> /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/kubernetes-upgrade-695907/apiserver.crt
	I0729 18:40:57.706835  132010 certs.go:385] copying /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/kubernetes-upgrade-695907/apiserver.key.e8d0a759 -> /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/kubernetes-upgrade-695907/apiserver.key
	I0729 18:40:57.706926  132010 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/kubernetes-upgrade-695907/proxy-client.key
	I0729 18:40:57.706949  132010 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/kubernetes-upgrade-695907/proxy-client.crt with IP's: []
	I0729 18:40:58.011328  132010 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/kubernetes-upgrade-695907/proxy-client.crt ...
	I0729 18:40:58.011368  132010 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/kubernetes-upgrade-695907/proxy-client.crt: {Name:mkf2883f0facbbb859807c5b95e1fc242ba58f19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:40:58.055827  132010 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/kubernetes-upgrade-695907/proxy-client.key ...
	I0729 18:40:58.055888  132010 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/kubernetes-upgrade-695907/proxy-client.key: {Name:mk487f1d71b9558c468301cfa7325444d31148f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:40:58.056228  132010 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282.pem (1338 bytes)
	W0729 18:40:58.056292  132010 certs.go:480] ignoring /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282_empty.pem, impossibly tiny 0 bytes
	I0729 18:40:58.056310  132010 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 18:40:58.056344  132010 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem (1078 bytes)
	I0729 18:40:58.056378  132010 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem (1123 bytes)
	I0729 18:40:58.056411  132010 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem (1679 bytes)
	I0729 18:40:58.056468  132010 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem (1708 bytes)
	I0729 18:40:58.057229  132010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 18:40:58.087636  132010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 18:40:58.126782  132010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 18:40:58.159632  132010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 18:40:58.196369  132010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/kubernetes-upgrade-695907/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0729 18:40:58.227322  132010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/kubernetes-upgrade-695907/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 18:40:58.253703  132010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/kubernetes-upgrade-695907/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 18:40:58.281260  132010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/kubernetes-upgrade-695907/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 18:40:58.307920  132010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem --> /usr/share/ca-certificates/952822.pem (1708 bytes)
	I0729 18:40:58.334595  132010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 18:40:58.363129  132010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282.pem --> /usr/share/ca-certificates/95282.pem (1338 bytes)
	I0729 18:40:58.389670  132010 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
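With the CA, profile, and proxy-client material copied into /var/lib/minikube/certs, the API-server certificate on the node should carry the IP SANs requested when it was generated above (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.72.224), alongside whatever DNS names minikube adds. openssl can confirm that directly (sketch; not part of the recorded run):

    # Print the SAN list of the staged apiserver certificate
    minikube -p kubernetes-upgrade-695907 ssh -- \
      "sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'"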
	I0729 18:40:58.408043  132010 ssh_runner.go:195] Run: openssl version
	I0729 18:40:58.414501  132010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/952822.pem && ln -fs /usr/share/ca-certificates/952822.pem /etc/ssl/certs/952822.pem"
	I0729 18:40:58.427165  132010 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/952822.pem
	I0729 18:40:58.431806  132010 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:45 /usr/share/ca-certificates/952822.pem
	I0729 18:40:58.431861  132010 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/952822.pem
	I0729 18:40:58.440083  132010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/952822.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 18:40:58.455116  132010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 18:40:58.466514  132010 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:40:58.471141  132010 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:40:58.471185  132010 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:40:58.476718  132010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 18:40:58.487368  132010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/95282.pem && ln -fs /usr/share/ca-certificates/95282.pem /etc/ssl/certs/95282.pem"
	I0729 18:40:58.498455  132010 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/95282.pem
	I0729 18:40:58.503094  132010 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:45 /usr/share/ca-certificates/95282.pem
	I0729 18:40:58.503141  132010 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/95282.pem
	I0729 18:40:58.509467  132010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/95282.pem /etc/ssl/certs/51391683.0"
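Each trust-store entry above follows the same pattern: copy the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash, and link /etc/ssl/certs/<hash>.0 back to it. The relationship can be spot-checked for any of the three files, for example the minikubeCA entry whose b5213941.0 link was created above (sketch):

    # The printed hash should match the b5213941.0 symlink target
    minikube -p kubernetes-upgrade-695907 ssh -- \
      "openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem && ls -l /etc/ssl/certs/b5213941.0"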
	I0729 18:40:58.524288  132010 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 18:40:58.528788  132010 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 18:40:58.528852  132010 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-695907 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-695907 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.224 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:40:58.528953  132010 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 18:40:58.529005  132010 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:40:58.567397  132010 cri.go:89] found id: ""
	I0729 18:40:58.567480  132010 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 18:40:58.577875  132010 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 18:40:58.588401  132010 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:40:58.600964  132010 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:40:58.600990  132010 kubeadm.go:157] found existing configuration files:
	
	I0729 18:40:58.601048  132010 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 18:40:58.610539  132010 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:40:58.610589  132010 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:40:58.620463  132010 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 18:40:58.630264  132010 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:40:58.630322  132010 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:40:58.640105  132010 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 18:40:58.649938  132010 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:40:58.649984  132010 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:40:58.660212  132010 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 18:40:58.670016  132010 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:40:58.670076  132010 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 18:40:58.680026  132010 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 18:40:58.816444  132010 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 18:40:58.816969  132010 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 18:40:59.006200  132010 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 18:40:59.006355  132010 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 18:40:59.006548  132010 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 18:40:59.199742  132010 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 18:40:59.231116  132010 out.go:204]   - Generating certificates and keys ...
	I0729 18:40:59.231322  132010 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 18:40:59.231470  132010 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 18:40:59.372111  132010 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0729 18:40:59.547936  132010 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0729 18:40:59.827458  132010 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0729 18:41:00.153338  132010 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0729 18:41:00.416566  132010 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0729 18:41:00.416716  132010 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-695907 localhost] and IPs [192.168.72.224 127.0.0.1 ::1]
	I0729 18:41:00.761603  132010 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0729 18:41:00.761950  132010 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-695907 localhost] and IPs [192.168.72.224 127.0.0.1 ::1]
	I0729 18:41:00.857990  132010 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0729 18:41:00.927704  132010 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0729 18:41:01.004834  132010 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0729 18:41:01.005173  132010 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 18:41:01.161591  132010 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 18:41:01.426409  132010 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 18:41:01.701293  132010 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 18:41:01.835237  132010 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 18:41:01.851761  132010 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 18:41:01.852670  132010 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 18:41:01.852736  132010 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 18:41:01.986720  132010 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 18:41:01.988552  132010 out.go:204]   - Booting up control plane ...
	I0729 18:41:01.988674  132010 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 18:41:01.995629  132010 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 18:41:01.996545  132010 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 18:41:01.997371  132010 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 18:41:02.001216  132010 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 18:41:41.988483  132010 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 18:41:41.989337  132010 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:41:41.989527  132010 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:41:46.989569  132010 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:41:46.989854  132010 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:41:56.989413  132010 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:41:56.989589  132010 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:42:16.990257  132010 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:42:16.990473  132010 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:42:56.991227  132010 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:42:56.991514  132010 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:42:56.991549  132010 kubeadm.go:310] 
	I0729 18:42:56.991613  132010 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 18:42:56.991688  132010 kubeadm.go:310] 		timed out waiting for the condition
	I0729 18:42:56.991706  132010 kubeadm.go:310] 
	I0729 18:42:56.991761  132010 kubeadm.go:310] 	This error is likely caused by:
	I0729 18:42:56.991807  132010 kubeadm.go:310] 		- The kubelet is not running
	I0729 18:42:56.991975  132010 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 18:42:56.991986  132010 kubeadm.go:310] 
	I0729 18:42:56.992150  132010 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 18:42:56.992197  132010 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 18:42:56.992250  132010 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 18:42:56.992259  132010 kubeadm.go:310] 
	I0729 18:42:56.992378  132010 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 18:42:56.992484  132010 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 18:42:56.992498  132010 kubeadm.go:310] 
	I0729 18:42:56.992653  132010 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 18:42:56.992771  132010 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 18:42:56.992892  132010 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 18:42:56.992987  132010 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 18:42:56.993000  132010 kubeadm.go:310] 
	I0729 18:42:56.993530  132010 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 18:42:56.993644  132010 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 18:42:56.993726  132010 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
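The kubeadm advisory above points at kubelet health rather than at kubeadm itself. A minimal way to follow it by hand, assuming shell access to the node (for example via `minikube ssh -p kubernetes-upgrade-695907`):

    # is the kubelet service running, and if not, why?
    systemctl status kubelet
    sudo journalctl -xeu kubelet --no-pager | tail -n 50
    # the same healthz probe kubeadm polls during wait-control-plane
    curl -sSL http://localhost:10248/healthz
    # any control-plane containers that started and then exited
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause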
	W0729 18:42:56.993889  132010 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-695907 localhost] and IPs [192.168.72.224 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-695907 localhost] and IPs [192.168.72.224 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0729 18:42:56.993946  132010 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 18:42:58.286860  132010 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.292878714s)
	I0729 18:42:58.286945  132010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:42:58.302078  132010 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:42:58.312207  132010 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:42:58.312230  132010 kubeadm.go:157] found existing configuration files:
	
	I0729 18:42:58.312287  132010 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 18:42:58.321472  132010 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:42:58.321534  132010 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:42:58.331367  132010 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 18:42:58.340651  132010 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:42:58.340730  132010 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:42:58.350422  132010 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 18:42:58.359801  132010 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:42:58.359860  132010 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:42:58.372876  132010 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 18:42:58.385516  132010 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:42:58.385576  132010 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 18:42:58.395016  132010 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 18:42:58.468656  132010 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 18:42:58.468753  132010 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 18:42:58.626265  132010 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 18:42:58.626420  132010 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 18:42:58.626541  132010 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 18:42:58.840847  132010 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 18:42:58.843692  132010 out.go:204]   - Generating certificates and keys ...
	I0729 18:42:58.843806  132010 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 18:42:58.843905  132010 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 18:42:58.844022  132010 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 18:42:58.844102  132010 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 18:42:58.844189  132010 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 18:42:58.844264  132010 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 18:42:58.844345  132010 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 18:42:58.844707  132010 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 18:42:58.845288  132010 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 18:42:58.845784  132010 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 18:42:58.845844  132010 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 18:42:58.845909  132010 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 18:42:58.956944  132010 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 18:42:59.275154  132010 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 18:42:59.448498  132010 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 18:42:59.648330  132010 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 18:42:59.662512  132010 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 18:42:59.663657  132010 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 18:42:59.663746  132010 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 18:42:59.805012  132010 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 18:42:59.806826  132010 out.go:204]   - Booting up control plane ...
	I0729 18:42:59.806970  132010 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 18:42:59.818700  132010 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 18:42:59.820167  132010 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 18:42:59.821358  132010 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 18:42:59.824291  132010 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 18:43:39.822766  132010 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 18:43:39.823293  132010 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:43:39.823589  132010 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:43:44.823663  132010 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:43:44.823958  132010 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:43:54.824094  132010 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:43:54.824392  132010 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:44:14.824945  132010 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:44:14.825222  132010 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:44:54.826867  132010 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:44:54.827180  132010 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:44:54.827198  132010 kubeadm.go:310] 
	I0729 18:44:54.827248  132010 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 18:44:54.827300  132010 kubeadm.go:310] 		timed out waiting for the condition
	I0729 18:44:54.827310  132010 kubeadm.go:310] 
	I0729 18:44:54.827353  132010 kubeadm.go:310] 	This error is likely caused by:
	I0729 18:44:54.827401  132010 kubeadm.go:310] 		- The kubelet is not running
	I0729 18:44:54.827570  132010 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 18:44:54.827587  132010 kubeadm.go:310] 
	I0729 18:44:54.827740  132010 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 18:44:54.827789  132010 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 18:44:54.827865  132010 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 18:44:54.827883  132010 kubeadm.go:310] 
	I0729 18:44:54.828012  132010 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 18:44:54.828102  132010 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 18:44:54.828109  132010 kubeadm.go:310] 
	I0729 18:44:54.828227  132010 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 18:44:54.828328  132010 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 18:44:54.828416  132010 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 18:44:54.828507  132010 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 18:44:54.828513  132010 kubeadm.go:310] 
	I0729 18:44:54.829398  132010 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 18:44:54.829528  132010 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 18:44:54.829655  132010 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 18:44:54.829701  132010 kubeadm.go:394] duration metric: took 3m56.300852826s to StartCluster
	I0729 18:44:54.829758  132010 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:44:54.829828  132010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:44:54.875666  132010 cri.go:89] found id: ""
	I0729 18:44:54.875696  132010 logs.go:276] 0 containers: []
	W0729 18:44:54.875706  132010 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:44:54.875714  132010 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:44:54.875792  132010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:44:54.917825  132010 cri.go:89] found id: ""
	I0729 18:44:54.917853  132010 logs.go:276] 0 containers: []
	W0729 18:44:54.917864  132010 logs.go:278] No container was found matching "etcd"
	I0729 18:44:54.917872  132010 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:44:54.917934  132010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:44:54.968936  132010 cri.go:89] found id: ""
	I0729 18:44:54.968965  132010 logs.go:276] 0 containers: []
	W0729 18:44:54.968975  132010 logs.go:278] No container was found matching "coredns"
	I0729 18:44:54.968983  132010 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:44:54.969051  132010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:44:55.014172  132010 cri.go:89] found id: ""
	I0729 18:44:55.014214  132010 logs.go:276] 0 containers: []
	W0729 18:44:55.014225  132010 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:44:55.014234  132010 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:44:55.014305  132010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:44:55.060832  132010 cri.go:89] found id: ""
	I0729 18:44:55.060882  132010 logs.go:276] 0 containers: []
	W0729 18:44:55.060894  132010 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:44:55.060903  132010 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:44:55.060975  132010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:44:55.108393  132010 cri.go:89] found id: ""
	I0729 18:44:55.108425  132010 logs.go:276] 0 containers: []
	W0729 18:44:55.108436  132010 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:44:55.108444  132010 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:44:55.108509  132010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:44:55.149152  132010 cri.go:89] found id: ""
	I0729 18:44:55.149180  132010 logs.go:276] 0 containers: []
	W0729 18:44:55.149193  132010 logs.go:278] No container was found matching "kindnet"
	I0729 18:44:55.149207  132010 logs.go:123] Gathering logs for dmesg ...
	I0729 18:44:55.149233  132010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:44:55.166626  132010 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:44:55.166660  132010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:44:55.303933  132010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:44:55.303962  132010 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:44:55.303982  132010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:44:55.451216  132010 logs.go:123] Gathering logs for container status ...
	I0729 18:44:55.451263  132010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:44:55.494471  132010 logs.go:123] Gathering logs for kubelet ...
	I0729 18:44:55.494502  132010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
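After the second failed attempt, minikube gathers diagnostics over SSH before reporting the error. The collection it performs above can be reproduced directly on the node with roughly:

    # container runtime and kubelet journals, as gathered above
    sudo journalctl -u crio -n 400 --no-pager
    sudo journalctl -u kubelet -n 400 --no-pager
    # container status: crictl if available, otherwise docker
    sudo crictl ps -a || sudo docker ps -a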
	W0729 18:44:55.564446  132010 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0729 18:44:55.564502  132010 out.go:239] * 
	W0729 18:44:55.564563  132010 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 18:44:55.564584  132010 out.go:239] * 
	* 
	W0729 18:44:55.565480  132010 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 18:44:55.569001  132010 out.go:177] 
	W0729 18:44:55.570367  132010 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 18:44:55.570434  132010 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0729 18:44:55.570471  132010 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0729 18:44:55.572227  132010 out.go:177] 

                                                
                                                
** /stderr **
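The kubeadm output above points at the kubelet itself. A minimal triage sketch for this profile, built only from the commands the log already suggests (profile name, memory, driver, and the cgroup-driver flag are taken from this run's own output):

    # inspect the kubelet inside the kubernetes-upgrade-695907 VM
    out/minikube-linux-amd64 ssh -p kubernetes-upgrade-695907 sudo systemctl status kubelet --no-pager
    out/minikube-linux-amd64 ssh -p kubernetes-upgrade-695907 sudo journalctl -xeu kubelet --no-pager
    # list control-plane containers via cri-o, as kubeadm recommends
    out/minikube-linux-amd64 ssh -p kubernetes-upgrade-695907 sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a
    # retry the start with the cgroup-driver hint from minikube's suggestion above
    out/minikube-linux-amd64 start -p kubernetes-upgrade-695907 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd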
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-695907 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-695907
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-695907: (1.309257353s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-695907 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-695907 status --format={{.Host}}: exit status 7 (70.317706ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-695907 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-695907 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (48.537353596s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-695907 version --output=json
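The same check can be run by hand against this profile; a small sketch of what the step above effectively verifies (jq is an assumption here, not part of the test; the expected value follows from the --kubernetes-version flag used in this run):

    # should print v1.31.0-beta.0 for the upgraded cluster
    kubectl --context kubernetes-upgrade-695907 version --output=json | jq -r '.serverVersion.gitVersion'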
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-695907 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-695907 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (81.501352ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-695907] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19339
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19339-88081/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19339-88081/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-beta.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-695907
	    minikube start -p kubernetes-upgrade-695907 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6959072 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-695907 --kubernetes-version=v1.31.0-beta.0
	    

                                                
                                                
** /stderr **
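As the stderr above notes, an in-place downgrade is refused by design; the workable way back to v1.20.0 is to recreate the profile. A sketch following minikube's first suggestion, with the driver and runtime flags this run uses:

    out/minikube-linux-amd64 delete -p kubernetes-upgrade-695907
    out/minikube-linux-amd64 start -p kubernetes-upgrade-695907 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio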
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-695907 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-695907 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (36.289251884s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-07-29 18:46:21.96914682 +0000 UTC m=+4391.939850620
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-695907 -n kubernetes-upgrade-695907
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-695907 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-695907 logs -n 25: (1.889229778s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p auto-085245 sudo systemctl                        | auto-085245           | jenkins | v1.33.1 | 29 Jul 24 18:46 UTC | 29 Jul 24 18:46 UTC |
	|         | status kubelet --all --full                          |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-085245 sudo systemctl                        | auto-085245           | jenkins | v1.33.1 | 29 Jul 24 18:46 UTC | 29 Jul 24 18:46 UTC |
	|         | cat kubelet --no-pager                               |                       |         |         |                     |                     |
	| ssh     | -p auto-085245 sudo journalctl                       | auto-085245           | jenkins | v1.33.1 | 29 Jul 24 18:46 UTC | 29 Jul 24 18:46 UTC |
	|         | -xeu kubelet --all --full                            |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-085245 sudo cat                              | auto-085245           | jenkins | v1.33.1 | 29 Jul 24 18:46 UTC | 29 Jul 24 18:46 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                       |         |         |                     |                     |
	| ssh     | -p auto-085245 sudo cat                              | auto-085245           | jenkins | v1.33.1 | 29 Jul 24 18:46 UTC | 29 Jul 24 18:46 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                       |         |         |                     |                     |
	| ssh     | -p auto-085245 sudo systemctl                        | auto-085245           | jenkins | v1.33.1 | 29 Jul 24 18:46 UTC |                     |
	|         | status docker --all --full                           |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-085245 sudo systemctl                        | auto-085245           | jenkins | v1.33.1 | 29 Jul 24 18:46 UTC | 29 Jul 24 18:46 UTC |
	|         | cat docker --no-pager                                |                       |         |         |                     |                     |
	| ssh     | -p auto-085245 sudo cat                              | auto-085245           | jenkins | v1.33.1 | 29 Jul 24 18:46 UTC | 29 Jul 24 18:46 UTC |
	|         | /etc/docker/daemon.json                              |                       |         |         |                     |                     |
	| ssh     | -p auto-085245 sudo docker                           | auto-085245           | jenkins | v1.33.1 | 29 Jul 24 18:46 UTC |                     |
	|         | system info                                          |                       |         |         |                     |                     |
	| ssh     | -p auto-085245 sudo systemctl                        | auto-085245           | jenkins | v1.33.1 | 29 Jul 24 18:46 UTC |                     |
	|         | status cri-docker --all --full                       |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-085245 sudo systemctl                        | auto-085245           | jenkins | v1.33.1 | 29 Jul 24 18:46 UTC | 29 Jul 24 18:46 UTC |
	|         | cat cri-docker --no-pager                            |                       |         |         |                     |                     |
	| ssh     | -p auto-085245 sudo cat                              | auto-085245           | jenkins | v1.33.1 | 29 Jul 24 18:46 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                       |         |         |                     |                     |
	| ssh     | -p auto-085245 sudo cat                              | auto-085245           | jenkins | v1.33.1 | 29 Jul 24 18:46 UTC | 29 Jul 24 18:46 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                       |         |         |                     |                     |
	| ssh     | -p auto-085245 sudo                                  | auto-085245           | jenkins | v1.33.1 | 29 Jul 24 18:46 UTC | 29 Jul 24 18:46 UTC |
	|         | cri-dockerd --version                                |                       |         |         |                     |                     |
	| ssh     | -p auto-085245 sudo systemctl                        | auto-085245           | jenkins | v1.33.1 | 29 Jul 24 18:46 UTC |                     |
	|         | status containerd --all --full                       |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-085245 sudo systemctl                        | auto-085245           | jenkins | v1.33.1 | 29 Jul 24 18:46 UTC | 29 Jul 24 18:46 UTC |
	|         | cat containerd --no-pager                            |                       |         |         |                     |                     |
	| ssh     | -p auto-085245 sudo cat                              | auto-085245           | jenkins | v1.33.1 | 29 Jul 24 18:46 UTC | 29 Jul 24 18:46 UTC |
	|         | /lib/systemd/system/containerd.service               |                       |         |         |                     |                     |
	| ssh     | -p auto-085245 sudo cat                              | auto-085245           | jenkins | v1.33.1 | 29 Jul 24 18:46 UTC | 29 Jul 24 18:46 UTC |
	|         | /etc/containerd/config.toml                          |                       |         |         |                     |                     |
	| ssh     | -p auto-085245 sudo containerd                       | auto-085245           | jenkins | v1.33.1 | 29 Jul 24 18:46 UTC | 29 Jul 24 18:46 UTC |
	|         | config dump                                          |                       |         |         |                     |                     |
	| ssh     | -p auto-085245 sudo systemctl                        | auto-085245           | jenkins | v1.33.1 | 29 Jul 24 18:46 UTC | 29 Jul 24 18:46 UTC |
	|         | status crio --all --full                             |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-085245 sudo systemctl                        | auto-085245           | jenkins | v1.33.1 | 29 Jul 24 18:46 UTC | 29 Jul 24 18:46 UTC |
	|         | cat crio --no-pager                                  |                       |         |         |                     |                     |
	| ssh     | -p auto-085245 sudo find                             | auto-085245           | jenkins | v1.33.1 | 29 Jul 24 18:46 UTC | 29 Jul 24 18:46 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                       |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                       |         |         |                     |                     |
	| ssh     | -p auto-085245 sudo crio                             | auto-085245           | jenkins | v1.33.1 | 29 Jul 24 18:46 UTC | 29 Jul 24 18:46 UTC |
	|         | config                                               |                       |         |         |                     |                     |
	| delete  | -p auto-085245                                       | auto-085245           | jenkins | v1.33.1 | 29 Jul 24 18:46 UTC | 29 Jul 24 18:46 UTC |
	| start   | -p custom-flannel-085245                             | custom-flannel-085245 | jenkins | v1.33.1 | 29 Jul 24 18:46 UTC |                     |
	|         | --memory=3072 --alsologtostderr                      |                       |         |         |                     |                     |
	|         | --wait=true --wait-timeout=15m                       |                       |         |         |                     |                     |
	|         | --cni=testdata/kube-flannel.yaml                     |                       |         |         |                     |                     |
	|         | --driver=kvm2                                        |                       |         |         |                     |                     |
	|         | --container-runtime=crio                             |                       |         |         |                     |                     |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 18:46:14
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 18:46:14.841706  139184 out.go:291] Setting OutFile to fd 1 ...
	I0729 18:46:14.842004  139184 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:46:14.842018  139184 out.go:304] Setting ErrFile to fd 2...
	I0729 18:46:14.842025  139184 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:46:14.842279  139184 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19339-88081/.minikube/bin
	I0729 18:46:14.843089  139184 out.go:298] Setting JSON to false
	I0729 18:46:14.844579  139184 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":12495,"bootTime":1722266280,"procs":308,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 18:46:14.844653  139184 start.go:139] virtualization: kvm guest
	I0729 18:46:14.847014  139184 out.go:177] * [custom-flannel-085245] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 18:46:14.848463  139184 notify.go:220] Checking for updates...
	I0729 18:46:14.848474  139184 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 18:46:14.849873  139184 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 18:46:14.851043  139184 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19339-88081/kubeconfig
	I0729 18:46:14.852165  139184 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19339-88081/.minikube
	I0729 18:46:14.853376  139184 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 18:46:14.854586  139184 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 18:46:14.856314  139184 config.go:182] Loaded profile config "cert-expiration-974855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:46:14.856469  139184 config.go:182] Loaded profile config "kindnet-085245": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:46:14.856585  139184 config.go:182] Loaded profile config "kubernetes-upgrade-695907": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 18:46:14.856697  139184 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 18:46:14.894988  139184 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 18:46:14.896270  139184 start.go:297] selected driver: kvm2
	I0729 18:46:14.896291  139184 start.go:901] validating driver "kvm2" against <nil>
	I0729 18:46:14.896306  139184 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 18:46:14.897411  139184 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 18:46:14.897516  139184 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19339-88081/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 18:46:14.914761  139184 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 18:46:14.914841  139184 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 18:46:14.915140  139184 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 18:46:14.915214  139184 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0729 18:46:14.915239  139184 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0729 18:46:14.915310  139184 start.go:340] cluster config:
	{Name:custom-flannel-085245 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-085245 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: So
cketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:46:14.915461  139184 iso.go:125] acquiring lock: {Name:mkff602c753bfa3e0d79d57ee8dc490f2e8d0298 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 18:46:14.917087  139184 out.go:177] * Starting "custom-flannel-085245" primary control-plane node in "custom-flannel-085245" cluster
	I0729 18:46:14.918257  139184 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 18:46:14.918301  139184 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 18:46:14.918311  139184 cache.go:56] Caching tarball of preloaded images
	I0729 18:46:14.918389  139184 preload.go:172] Found /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 18:46:14.918415  139184 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 18:46:14.918519  139184 profile.go:143] Saving config to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/custom-flannel-085245/config.json ...
	I0729 18:46:14.918540  139184 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/custom-flannel-085245/config.json: {Name:mkc27d21a5c0fa8f2db2cdaf40216ded5f09df17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:46:14.918696  139184 start.go:360] acquireMachinesLock for custom-flannel-085245: {Name:mkb4fc91615189a18c2505c715f6575ee0da2912 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 18:46:14.918731  139184 start.go:364] duration metric: took 18.386µs to acquireMachinesLock for "custom-flannel-085245"
	I0729 18:46:14.918751  139184 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-085245 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-085245 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 18:46:14.918835  139184 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 18:46:10.726650  137193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:46:11.226927  137193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:46:11.726526  137193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:46:12.226605  137193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:46:12.726631  137193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:46:13.227019  137193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:46:13.727376  137193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:46:14.226512  137193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:46:14.726606  137193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:46:15.227250  137193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:46:13.334417  137707 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 024cd52bb30891e372259703e9491bd7d40ff69c57285ce45b27083eaa249ddc 8a419b5f089f45d260a88ea38deffe9f1ea10b7afbdc794b14cc838c067fb724 a1140dcc277cb6f875d67b654c1bece822c33f23b5093841bf57f711c228b5e0 a4c5a4a675df15f692c1ed0845b1bab4833cc12a8936ff55a35fb021408038ea 909fd73182bfda0fb01dd5624aeebe8b58f9a6b4486c434405b03201468dab2c 84ad1e619be9c1a2a13315125eee3c0d41796503789b4374c7d882524a9d850e 17c60b7eab5389b4a9b79767791f16e6020c85c28677bef23208730e1ab1778c 8be27d2594a81b8c9c9e8cd82ec9e1b1116da2beb696e92287bbc1a421eaa843 6e7928ec8dc264f672784e6e23cec1aabf2a31872d2a73653732ca8a868b4934 0f5f5b0be375c80a66b7233bb35908f47f539c1a45cb49ed7a9dacf66bb43a82 463534228ced54ec3851e2e1d75778ee31f1283d971d655d24f00993795c00dd 80a1718dbc15324a6ecc7bd835fb9f5f6ab743059a56ce16c54102cb40ec8fc0 0cb9f3b7b3438dfadf46f05e5ed4afbb6acc033716f53c764ba7712ab89e7a5a: (3.949666592s)
	W0729 18:46:13.334496  137707 kubeadm.go:644] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 024cd52bb30891e372259703e9491bd7d40ff69c57285ce45b27083eaa249ddc 8a419b5f089f45d260a88ea38deffe9f1ea10b7afbdc794b14cc838c067fb724 a1140dcc277cb6f875d67b654c1bece822c33f23b5093841bf57f711c228b5e0 a4c5a4a675df15f692c1ed0845b1bab4833cc12a8936ff55a35fb021408038ea 909fd73182bfda0fb01dd5624aeebe8b58f9a6b4486c434405b03201468dab2c 84ad1e619be9c1a2a13315125eee3c0d41796503789b4374c7d882524a9d850e 17c60b7eab5389b4a9b79767791f16e6020c85c28677bef23208730e1ab1778c 8be27d2594a81b8c9c9e8cd82ec9e1b1116da2beb696e92287bbc1a421eaa843 6e7928ec8dc264f672784e6e23cec1aabf2a31872d2a73653732ca8a868b4934 0f5f5b0be375c80a66b7233bb35908f47f539c1a45cb49ed7a9dacf66bb43a82 463534228ced54ec3851e2e1d75778ee31f1283d971d655d24f00993795c00dd 80a1718dbc15324a6ecc7bd835fb9f5f6ab743059a56ce16c54102cb40ec8fc0 0cb9f3b7b3438dfadf46f05e5ed4afbb6acc033716f53c764ba7712ab89e7a5a: Proce
ss exited with status 1
	stdout:
	024cd52bb30891e372259703e9491bd7d40ff69c57285ce45b27083eaa249ddc
	8a419b5f089f45d260a88ea38deffe9f1ea10b7afbdc794b14cc838c067fb724
	a1140dcc277cb6f875d67b654c1bece822c33f23b5093841bf57f711c228b5e0
	a4c5a4a675df15f692c1ed0845b1bab4833cc12a8936ff55a35fb021408038ea
	909fd73182bfda0fb01dd5624aeebe8b58f9a6b4486c434405b03201468dab2c
	84ad1e619be9c1a2a13315125eee3c0d41796503789b4374c7d882524a9d850e
	17c60b7eab5389b4a9b79767791f16e6020c85c28677bef23208730e1ab1778c
	
	stderr:
	E0729 18:46:13.322433    3139 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8be27d2594a81b8c9c9e8cd82ec9e1b1116da2beb696e92287bbc1a421eaa843\": container with ID starting with 8be27d2594a81b8c9c9e8cd82ec9e1b1116da2beb696e92287bbc1a421eaa843 not found: ID does not exist" containerID="8be27d2594a81b8c9c9e8cd82ec9e1b1116da2beb696e92287bbc1a421eaa843"
	time="2024-07-29T18:46:13Z" level=fatal msg="stopping the container \"8be27d2594a81b8c9c9e8cd82ec9e1b1116da2beb696e92287bbc1a421eaa843\": rpc error: code = NotFound desc = could not find container \"8be27d2594a81b8c9c9e8cd82ec9e1b1116da2beb696e92287bbc1a421eaa843\": container with ID starting with 8be27d2594a81b8c9c9e8cd82ec9e1b1116da2beb696e92287bbc1a421eaa843 not found: ID does not exist"
	I0729 18:46:13.334612  137707 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 18:46:13.387303  137707 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:46:13.399526  137707 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5647 Jul 29 18:45 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5654 Jul 29 18:45 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5759 Jul 29 18:45 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5606 Jul 29 18:45 /etc/kubernetes/scheduler.conf
	
	I0729 18:46:13.399590  137707 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 18:46:13.409968  137707 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 18:46:13.421027  137707 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 18:46:13.430763  137707 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0729 18:46:13.430817  137707 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:46:13.440903  137707 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 18:46:13.451361  137707 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0729 18:46:13.451431  137707 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 18:46:13.461798  137707 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 18:46:13.471281  137707 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:46:13.529874  137707 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:46:14.585739  137707 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.055825719s)
	I0729 18:46:14.585783  137707 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:46:14.846691  137707 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:46:14.918938  137707 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:46:15.041836  137707 api_server.go:52] waiting for apiserver process to appear ...
	I0729 18:46:15.041925  137707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:46:15.541983  137707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:46:14.920318  139184 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 18:46:14.920514  139184 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:46:14.920564  139184 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:46:14.936519  139184 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39179
	I0729 18:46:14.937092  139184 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:46:14.937723  139184 main.go:141] libmachine: Using API Version  1
	I0729 18:46:14.937756  139184 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:46:14.938131  139184 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:46:14.938383  139184 main.go:141] libmachine: (custom-flannel-085245) Calling .GetMachineName
	I0729 18:46:14.938559  139184 main.go:141] libmachine: (custom-flannel-085245) Calling .DriverName
	I0729 18:46:14.938724  139184 start.go:159] libmachine.API.Create for "custom-flannel-085245" (driver="kvm2")
	I0729 18:46:14.938756  139184 client.go:168] LocalClient.Create starting
	I0729 18:46:14.938791  139184 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem
	I0729 18:46:14.938834  139184 main.go:141] libmachine: Decoding PEM data...
	I0729 18:46:14.938851  139184 main.go:141] libmachine: Parsing certificate...
	I0729 18:46:14.938909  139184 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem
	I0729 18:46:14.938927  139184 main.go:141] libmachine: Decoding PEM data...
	I0729 18:46:14.938936  139184 main.go:141] libmachine: Parsing certificate...
	I0729 18:46:14.938956  139184 main.go:141] libmachine: Running pre-create checks...
	I0729 18:46:14.938973  139184 main.go:141] libmachine: (custom-flannel-085245) Calling .PreCreateCheck
	I0729 18:46:14.939386  139184 main.go:141] libmachine: (custom-flannel-085245) Calling .GetConfigRaw
	I0729 18:46:14.939870  139184 main.go:141] libmachine: Creating machine...
	I0729 18:46:14.939890  139184 main.go:141] libmachine: (custom-flannel-085245) Calling .Create
	I0729 18:46:14.940052  139184 main.go:141] libmachine: (custom-flannel-085245) Creating KVM machine...
	I0729 18:46:14.941570  139184 main.go:141] libmachine: (custom-flannel-085245) DBG | found existing default KVM network
	I0729 18:46:14.943028  139184 main.go:141] libmachine: (custom-flannel-085245) DBG | I0729 18:46:14.942861  139207 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012df80}
	I0729 18:46:14.943052  139184 main.go:141] libmachine: (custom-flannel-085245) DBG | created network xml: 
	I0729 18:46:14.943077  139184 main.go:141] libmachine: (custom-flannel-085245) DBG | <network>
	I0729 18:46:14.943097  139184 main.go:141] libmachine: (custom-flannel-085245) DBG |   <name>mk-custom-flannel-085245</name>
	I0729 18:46:14.943110  139184 main.go:141] libmachine: (custom-flannel-085245) DBG |   <dns enable='no'/>
	I0729 18:46:14.943117  139184 main.go:141] libmachine: (custom-flannel-085245) DBG |   
	I0729 18:46:14.943124  139184 main.go:141] libmachine: (custom-flannel-085245) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0729 18:46:14.943131  139184 main.go:141] libmachine: (custom-flannel-085245) DBG |     <dhcp>
	I0729 18:46:14.943138  139184 main.go:141] libmachine: (custom-flannel-085245) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0729 18:46:14.943144  139184 main.go:141] libmachine: (custom-flannel-085245) DBG |     </dhcp>
	I0729 18:46:14.943151  139184 main.go:141] libmachine: (custom-flannel-085245) DBG |   </ip>
	I0729 18:46:14.943159  139184 main.go:141] libmachine: (custom-flannel-085245) DBG |   
	I0729 18:46:14.943170  139184 main.go:141] libmachine: (custom-flannel-085245) DBG | </network>
	I0729 18:46:14.943184  139184 main.go:141] libmachine: (custom-flannel-085245) DBG | 
	I0729 18:46:14.948836  139184 main.go:141] libmachine: (custom-flannel-085245) DBG | trying to create private KVM network mk-custom-flannel-085245 192.168.39.0/24...
	I0729 18:46:15.025639  139184 main.go:141] libmachine: (custom-flannel-085245) DBG | private KVM network mk-custom-flannel-085245 192.168.39.0/24 created
	I0729 18:46:15.025698  139184 main.go:141] libmachine: (custom-flannel-085245) DBG | I0729 18:46:15.025590  139207 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19339-88081/.minikube
	I0729 18:46:15.025711  139184 main.go:141] libmachine: (custom-flannel-085245) Setting up store path in /home/jenkins/minikube-integration/19339-88081/.minikube/machines/custom-flannel-085245 ...
	I0729 18:46:15.025739  139184 main.go:141] libmachine: (custom-flannel-085245) Building disk image from file:///home/jenkins/minikube-integration/19339-88081/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0729 18:46:15.025761  139184 main.go:141] libmachine: (custom-flannel-085245) Downloading /home/jenkins/minikube-integration/19339-88081/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19339-88081/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0729 18:46:15.288645  139184 main.go:141] libmachine: (custom-flannel-085245) DBG | I0729 18:46:15.288515  139207 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/custom-flannel-085245/id_rsa...
	I0729 18:46:15.430889  139184 main.go:141] libmachine: (custom-flannel-085245) DBG | I0729 18:46:15.430769  139207 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/custom-flannel-085245/custom-flannel-085245.rawdisk...
	I0729 18:46:15.430924  139184 main.go:141] libmachine: (custom-flannel-085245) DBG | Writing magic tar header
	I0729 18:46:15.430982  139184 main.go:141] libmachine: (custom-flannel-085245) DBG | Writing SSH key tar header
	I0729 18:46:15.431023  139184 main.go:141] libmachine: (custom-flannel-085245) DBG | I0729 18:46:15.430931  139207 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19339-88081/.minikube/machines/custom-flannel-085245 ...
	I0729 18:46:15.431807  139184 main.go:141] libmachine: (custom-flannel-085245) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/custom-flannel-085245
	I0729 18:46:15.431840  139184 main.go:141] libmachine: (custom-flannel-085245) Setting executable bit set on /home/jenkins/minikube-integration/19339-88081/.minikube/machines/custom-flannel-085245 (perms=drwx------)
	I0729 18:46:15.431853  139184 main.go:141] libmachine: (custom-flannel-085245) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19339-88081/.minikube/machines
	I0729 18:46:15.431872  139184 main.go:141] libmachine: (custom-flannel-085245) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19339-88081/.minikube
	I0729 18:46:15.431884  139184 main.go:141] libmachine: (custom-flannel-085245) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19339-88081
	I0729 18:46:15.431898  139184 main.go:141] libmachine: (custom-flannel-085245) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 18:46:15.431912  139184 main.go:141] libmachine: (custom-flannel-085245) Setting executable bit set on /home/jenkins/minikube-integration/19339-88081/.minikube/machines (perms=drwxr-xr-x)
	I0729 18:46:15.431924  139184 main.go:141] libmachine: (custom-flannel-085245) DBG | Checking permissions on dir: /home/jenkins
	I0729 18:46:15.431936  139184 main.go:141] libmachine: (custom-flannel-085245) DBG | Checking permissions on dir: /home
	I0729 18:46:15.431947  139184 main.go:141] libmachine: (custom-flannel-085245) DBG | Skipping /home - not owner
	I0729 18:46:15.431964  139184 main.go:141] libmachine: (custom-flannel-085245) Setting executable bit set on /home/jenkins/minikube-integration/19339-88081/.minikube (perms=drwxr-xr-x)
	I0729 18:46:15.431976  139184 main.go:141] libmachine: (custom-flannel-085245) Setting executable bit set on /home/jenkins/minikube-integration/19339-88081 (perms=drwxrwxr-x)
	I0729 18:46:15.431990  139184 main.go:141] libmachine: (custom-flannel-085245) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 18:46:15.432007  139184 main.go:141] libmachine: (custom-flannel-085245) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 18:46:15.432017  139184 main.go:141] libmachine: (custom-flannel-085245) Creating domain...
	I0729 18:46:15.434140  139184 main.go:141] libmachine: (custom-flannel-085245) define libvirt domain using xml: 
	I0729 18:46:15.434194  139184 main.go:141] libmachine: (custom-flannel-085245) <domain type='kvm'>
	I0729 18:46:15.434215  139184 main.go:141] libmachine: (custom-flannel-085245)   <name>custom-flannel-085245</name>
	I0729 18:46:15.434231  139184 main.go:141] libmachine: (custom-flannel-085245)   <memory unit='MiB'>3072</memory>
	I0729 18:46:15.434244  139184 main.go:141] libmachine: (custom-flannel-085245)   <vcpu>2</vcpu>
	I0729 18:46:15.434254  139184 main.go:141] libmachine: (custom-flannel-085245)   <features>
	I0729 18:46:15.434265  139184 main.go:141] libmachine: (custom-flannel-085245)     <acpi/>
	I0729 18:46:15.434275  139184 main.go:141] libmachine: (custom-flannel-085245)     <apic/>
	I0729 18:46:15.434284  139184 main.go:141] libmachine: (custom-flannel-085245)     <pae/>
	I0729 18:46:15.434303  139184 main.go:141] libmachine: (custom-flannel-085245)     
	I0729 18:46:15.434315  139184 main.go:141] libmachine: (custom-flannel-085245)   </features>
	I0729 18:46:15.434325  139184 main.go:141] libmachine: (custom-flannel-085245)   <cpu mode='host-passthrough'>
	I0729 18:46:15.434332  139184 main.go:141] libmachine: (custom-flannel-085245)   
	I0729 18:46:15.434340  139184 main.go:141] libmachine: (custom-flannel-085245)   </cpu>
	I0729 18:46:15.434349  139184 main.go:141] libmachine: (custom-flannel-085245)   <os>
	I0729 18:46:15.434358  139184 main.go:141] libmachine: (custom-flannel-085245)     <type>hvm</type>
	I0729 18:46:15.434368  139184 main.go:141] libmachine: (custom-flannel-085245)     <boot dev='cdrom'/>
	I0729 18:46:15.434376  139184 main.go:141] libmachine: (custom-flannel-085245)     <boot dev='hd'/>
	I0729 18:46:15.434383  139184 main.go:141] libmachine: (custom-flannel-085245)     <bootmenu enable='no'/>
	I0729 18:46:15.434396  139184 main.go:141] libmachine: (custom-flannel-085245)   </os>
	I0729 18:46:15.434420  139184 main.go:141] libmachine: (custom-flannel-085245)   <devices>
	I0729 18:46:15.434444  139184 main.go:141] libmachine: (custom-flannel-085245)     <disk type='file' device='cdrom'>
	I0729 18:46:15.434471  139184 main.go:141] libmachine: (custom-flannel-085245)       <source file='/home/jenkins/minikube-integration/19339-88081/.minikube/machines/custom-flannel-085245/boot2docker.iso'/>
	I0729 18:46:15.434483  139184 main.go:141] libmachine: (custom-flannel-085245)       <target dev='hdc' bus='scsi'/>
	I0729 18:46:15.434492  139184 main.go:141] libmachine: (custom-flannel-085245)       <readonly/>
	I0729 18:46:15.434502  139184 main.go:141] libmachine: (custom-flannel-085245)     </disk>
	I0729 18:46:15.434548  139184 main.go:141] libmachine: (custom-flannel-085245)     <disk type='file' device='disk'>
	I0729 18:46:15.434576  139184 main.go:141] libmachine: (custom-flannel-085245)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 18:46:15.434595  139184 main.go:141] libmachine: (custom-flannel-085245)       <source file='/home/jenkins/minikube-integration/19339-88081/.minikube/machines/custom-flannel-085245/custom-flannel-085245.rawdisk'/>
	I0729 18:46:15.434607  139184 main.go:141] libmachine: (custom-flannel-085245)       <target dev='hda' bus='virtio'/>
	I0729 18:46:15.434624  139184 main.go:141] libmachine: (custom-flannel-085245)     </disk>
	I0729 18:46:15.434635  139184 main.go:141] libmachine: (custom-flannel-085245)     <interface type='network'>
	I0729 18:46:15.434650  139184 main.go:141] libmachine: (custom-flannel-085245)       <source network='mk-custom-flannel-085245'/>
	I0729 18:46:15.434662  139184 main.go:141] libmachine: (custom-flannel-085245)       <model type='virtio'/>
	I0729 18:46:15.434672  139184 main.go:141] libmachine: (custom-flannel-085245)     </interface>
	I0729 18:46:15.434686  139184 main.go:141] libmachine: (custom-flannel-085245)     <interface type='network'>
	I0729 18:46:15.434707  139184 main.go:141] libmachine: (custom-flannel-085245)       <source network='default'/>
	I0729 18:46:15.434720  139184 main.go:141] libmachine: (custom-flannel-085245)       <model type='virtio'/>
	I0729 18:46:15.434730  139184 main.go:141] libmachine: (custom-flannel-085245)     </interface>
	I0729 18:46:15.434739  139184 main.go:141] libmachine: (custom-flannel-085245)     <serial type='pty'>
	I0729 18:46:15.434754  139184 main.go:141] libmachine: (custom-flannel-085245)       <target port='0'/>
	I0729 18:46:15.434770  139184 main.go:141] libmachine: (custom-flannel-085245)     </serial>
	I0729 18:46:15.434779  139184 main.go:141] libmachine: (custom-flannel-085245)     <console type='pty'>
	I0729 18:46:15.434789  139184 main.go:141] libmachine: (custom-flannel-085245)       <target type='serial' port='0'/>
	I0729 18:46:15.434796  139184 main.go:141] libmachine: (custom-flannel-085245)     </console>
	I0729 18:46:15.434808  139184 main.go:141] libmachine: (custom-flannel-085245)     <rng model='virtio'>
	I0729 18:46:15.434825  139184 main.go:141] libmachine: (custom-flannel-085245)       <backend model='random'>/dev/random</backend>
	I0729 18:46:15.434840  139184 main.go:141] libmachine: (custom-flannel-085245)     </rng>
	I0729 18:46:15.434851  139184 main.go:141] libmachine: (custom-flannel-085245)     
	I0729 18:46:15.434859  139184 main.go:141] libmachine: (custom-flannel-085245)     
	I0729 18:46:15.434870  139184 main.go:141] libmachine: (custom-flannel-085245)   </devices>
	I0729 18:46:15.434879  139184 main.go:141] libmachine: (custom-flannel-085245) </domain>
	I0729 18:46:15.434888  139184 main.go:141] libmachine: (custom-flannel-085245) 
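
The lines above spell out the full libvirt <domain> definition that the kvm2 driver hands to libvirt, one log line per XML line. Purely as an illustration of how such a definition can be rendered from a few machine-specific values, here is a minimal Go sketch using the standard text/template package; the struct, field names, and trimmed-down template are assumptions for the example, not the kvm2 driver's actual code, and the file paths are placeholders.

package main

import (
	"os"
	"text/template"
)

// domainConfig holds the handful of values that vary between machines.
// The field names are illustrative, not the kvm2 driver's real types.
type domainConfig struct {
	Name     string
	MemoryMB int
	VCPU     int
	ISOPath  string
	DiskPath string
	Network  string
}

// domainXML mirrors the <domain> definition printed in the log above,
// trimmed to the parts shown there.
const domainXML = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMB}}</memory>
  <vcpu>{{.VCPU}}</vcpu>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='cdrom'>
      <source file='{{.ISOPath}}'/>
      <target dev='hdc' bus='scsi'/>
      <readonly/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads'/>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

func main() {
	tmpl := template.Must(template.New("domain").Parse(domainXML))
	cfg := domainConfig{
		Name:     "custom-flannel-085245",
		MemoryMB: 3072,
		VCPU:     2,
		ISOPath:  "/path/to/boot2docker.iso",
		DiskPath: "/path/to/custom-flannel-085245.rawdisk",
		Network:  "mk-custom-flannel-085245",
	}
	// Render the XML that would then be passed to libvirt to define the domain.
	if err := tmpl.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}
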
	I0729 18:46:15.439037  139184 main.go:141] libmachine: (custom-flannel-085245) DBG | domain custom-flannel-085245 has defined MAC address 52:54:00:e2:4d:46 in network default
	I0729 18:46:15.439701  139184 main.go:141] libmachine: (custom-flannel-085245) Ensuring networks are active...
	I0729 18:46:15.439734  139184 main.go:141] libmachine: (custom-flannel-085245) DBG | domain custom-flannel-085245 has defined MAC address 52:54:00:5b:39:1e in network mk-custom-flannel-085245
	I0729 18:46:15.440474  139184 main.go:141] libmachine: (custom-flannel-085245) Ensuring network default is active
	I0729 18:46:15.440790  139184 main.go:141] libmachine: (custom-flannel-085245) Ensuring network mk-custom-flannel-085245 is active
	I0729 18:46:15.441535  139184 main.go:141] libmachine: (custom-flannel-085245) Getting domain xml...
	I0729 18:46:15.442218  139184 main.go:141] libmachine: (custom-flannel-085245) Creating domain...
	I0729 18:46:15.845388  139184 main.go:141] libmachine: (custom-flannel-085245) Waiting to get IP...
	I0729 18:46:15.846455  139184 main.go:141] libmachine: (custom-flannel-085245) DBG | domain custom-flannel-085245 has defined MAC address 52:54:00:5b:39:1e in network mk-custom-flannel-085245
	I0729 18:46:15.847019  139184 main.go:141] libmachine: (custom-flannel-085245) DBG | unable to find current IP address of domain custom-flannel-085245 in network mk-custom-flannel-085245
	I0729 18:46:15.847053  139184 main.go:141] libmachine: (custom-flannel-085245) DBG | I0729 18:46:15.846991  139207 retry.go:31] will retry after 189.792736ms: waiting for machine to come up
	I0729 18:46:16.038591  139184 main.go:141] libmachine: (custom-flannel-085245) DBG | domain custom-flannel-085245 has defined MAC address 52:54:00:5b:39:1e in network mk-custom-flannel-085245
	I0729 18:46:16.039198  139184 main.go:141] libmachine: (custom-flannel-085245) DBG | unable to find current IP address of domain custom-flannel-085245 in network mk-custom-flannel-085245
	I0729 18:46:16.039243  139184 main.go:141] libmachine: (custom-flannel-085245) DBG | I0729 18:46:16.039170  139207 retry.go:31] will retry after 380.389032ms: waiting for machine to come up
	I0729 18:46:16.420996  139184 main.go:141] libmachine: (custom-flannel-085245) DBG | domain custom-flannel-085245 has defined MAC address 52:54:00:5b:39:1e in network mk-custom-flannel-085245
	I0729 18:46:16.421735  139184 main.go:141] libmachine: (custom-flannel-085245) DBG | unable to find current IP address of domain custom-flannel-085245 in network mk-custom-flannel-085245
	I0729 18:46:16.421798  139184 main.go:141] libmachine: (custom-flannel-085245) DBG | I0729 18:46:16.421562  139207 retry.go:31] will retry after 368.934523ms: waiting for machine to come up
	I0729 18:46:16.792248  139184 main.go:141] libmachine: (custom-flannel-085245) DBG | domain custom-flannel-085245 has defined MAC address 52:54:00:5b:39:1e in network mk-custom-flannel-085245
	I0729 18:46:16.792847  139184 main.go:141] libmachine: (custom-flannel-085245) DBG | unable to find current IP address of domain custom-flannel-085245 in network mk-custom-flannel-085245
	I0729 18:46:16.792894  139184 main.go:141] libmachine: (custom-flannel-085245) DBG | I0729 18:46:16.792788  139207 retry.go:31] will retry after 367.552669ms: waiting for machine to come up
	I0729 18:46:17.162598  139184 main.go:141] libmachine: (custom-flannel-085245) DBG | domain custom-flannel-085245 has defined MAC address 52:54:00:5b:39:1e in network mk-custom-flannel-085245
	I0729 18:46:17.163128  139184 main.go:141] libmachine: (custom-flannel-085245) DBG | unable to find current IP address of domain custom-flannel-085245 in network mk-custom-flannel-085245
	I0729 18:46:17.163157  139184 main.go:141] libmachine: (custom-flannel-085245) DBG | I0729 18:46:17.163071  139207 retry.go:31] will retry after 507.18208ms: waiting for machine to come up
	I0729 18:46:17.671866  139184 main.go:141] libmachine: (custom-flannel-085245) DBG | domain custom-flannel-085245 has defined MAC address 52:54:00:5b:39:1e in network mk-custom-flannel-085245
	I0729 18:46:17.672420  139184 main.go:141] libmachine: (custom-flannel-085245) DBG | unable to find current IP address of domain custom-flannel-085245 in network mk-custom-flannel-085245
	I0729 18:46:17.672448  139184 main.go:141] libmachine: (custom-flannel-085245) DBG | I0729 18:46:17.672336  139207 retry.go:31] will retry after 716.047239ms: waiting for machine to come up
	I0729 18:46:18.390069  139184 main.go:141] libmachine: (custom-flannel-085245) DBG | domain custom-flannel-085245 has defined MAC address 52:54:00:5b:39:1e in network mk-custom-flannel-085245
	I0729 18:46:18.390596  139184 main.go:141] libmachine: (custom-flannel-085245) DBG | unable to find current IP address of domain custom-flannel-085245 in network mk-custom-flannel-085245
	I0729 18:46:18.390630  139184 main.go:141] libmachine: (custom-flannel-085245) DBG | I0729 18:46:18.390543  139207 retry.go:31] will retry after 1.085576439s: waiting for machine to come up
	I0729 18:46:19.477745  139184 main.go:141] libmachine: (custom-flannel-085245) DBG | domain custom-flannel-085245 has defined MAC address 52:54:00:5b:39:1e in network mk-custom-flannel-085245
	I0729 18:46:19.478333  139184 main.go:141] libmachine: (custom-flannel-085245) DBG | unable to find current IP address of domain custom-flannel-085245 in network mk-custom-flannel-085245
	I0729 18:46:19.478361  139184 main.go:141] libmachine: (custom-flannel-085245) DBG | I0729 18:46:19.478286  139207 retry.go:31] will retry after 1.189800088s: waiting for machine to come up
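
The repeated "unable to find current IP address ... will retry after ...: waiting for machine to come up" lines show the driver polling until the freshly created domain picks up a DHCP lease. A self-contained Go sketch of that retry-with-growing-randomized-delay pattern follows; lookupIP is a hypothetical stand-in for the real lease lookup, and the delay values are illustrative rather than the exact ones retry.go computes.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a hypothetical stand-in for querying the libvirt network's
// DHCP leases for the machine's MAC address.
func lookupIP(attempt int) (string, error) {
	if attempt < 8 {
		return "", errors.New("unable to find current IP address of domain")
	}
	return "192.168.39.10", nil
}

func main() {
	// Retry with a randomized, slowly growing delay, as in the log above.
	for attempt := 1; ; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		delay := time.Duration(attempt*(100+rand.Intn(400))) * time.Millisecond
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
	}
}
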
	I0729 18:46:16.042620  137707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:46:16.060548  137707 api_server.go:72] duration metric: took 1.018712682s to wait for apiserver process to appear ...
	I0729 18:46:16.060579  137707 api_server.go:88] waiting for apiserver healthz status ...
	I0729 18:46:16.060604  137707 api_server.go:253] Checking apiserver healthz at https://192.168.72.224:8443/healthz ...
	I0729 18:46:18.471825  137707 api_server.go:279] https://192.168.72.224:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 18:46:18.471854  137707 api_server.go:103] status: https://192.168.72.224:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 18:46:18.471869  137707 api_server.go:253] Checking apiserver healthz at https://192.168.72.224:8443/healthz ...
	I0729 18:46:18.513772  137707 api_server.go:279] https://192.168.72.224:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 18:46:18.513807  137707 api_server.go:103] status: https://192.168.72.224:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 18:46:18.560939  137707 api_server.go:253] Checking apiserver healthz at https://192.168.72.224:8443/healthz ...
	I0729 18:46:18.566151  137707 api_server.go:279] https://192.168.72.224:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 18:46:18.566178  137707 api_server.go:103] status: https://192.168.72.224:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 18:46:19.060722  137707 api_server.go:253] Checking apiserver healthz at https://192.168.72.224:8443/healthz ...
	I0729 18:46:19.070498  137707 api_server.go:279] https://192.168.72.224:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 18:46:19.070531  137707 api_server.go:103] status: https://192.168.72.224:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 18:46:19.561267  137707 api_server.go:253] Checking apiserver healthz at https://192.168.72.224:8443/healthz ...
	I0729 18:46:19.566758  137707 api_server.go:279] https://192.168.72.224:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 18:46:19.566784  137707 api_server.go:103] status: https://192.168.72.224:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 18:46:20.060941  137707 api_server.go:253] Checking apiserver healthz at https://192.168.72.224:8443/healthz ...
	I0729 18:46:20.065708  137707 api_server.go:279] https://192.168.72.224:8443/healthz returned 200:
	ok
	I0729 18:46:20.073296  137707 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 18:46:20.073321  137707 api_server.go:131] duration metric: took 4.012735267s to wait for apiserver health ...
	I0729 18:46:20.073330  137707 cni.go:84] Creating CNI manager for ""
	I0729 18:46:20.073337  137707 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:46:20.075186  137707 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
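
The healthz probes above progress from 403 responses (the anonymous probe is rejected while bootstrapping is still pending) through 500 (several post-start hooks not yet synced) to 200 once the apiserver is ready. A minimal Go sketch of polling an apiserver's /healthz endpoint in that way is below; the URL is taken from the log, while the InsecureSkipVerify transport is only a shortcut to keep the example short; a real check should trust the cluster CA instead.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Poll the apiserver /healthz endpoint until it reports "ok".
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Shortcut for the sketch only; verify against the cluster CA in practice.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	url := "https://192.168.72.224:8443/healthz"
	for {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("healthz not reachable yet:", err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}
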
	I0729 18:46:15.727148  137193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:46:16.227408  137193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:46:16.726630  137193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:46:17.226984  137193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:46:17.726709  137193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:46:18.226485  137193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:46:18.726917  137193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:46:19.226443  137193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:46:19.727197  137193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:46:20.226523  137193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:46:20.076467  137707 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 18:46:20.087623  137707 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 18:46:20.108564  137707 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 18:46:20.126476  137707 system_pods.go:59] 8 kube-system pods found
	I0729 18:46:20.126508  137707 system_pods.go:61] "coredns-5cfdc65f69-fcx4p" [1e3a75cd-5ef1-450b-93dc-c52dfd23bdbc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 18:46:20.126514  137707 system_pods.go:61] "coredns-5cfdc65f69-jh9jx" [39c91d68-14ab-484a-9019-21ba96949987] Running
	I0729 18:46:20.126522  137707 system_pods.go:61] "etcd-kubernetes-upgrade-695907" [26057da4-74d4-490c-b64a-50a566619c7e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 18:46:20.126528  137707 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-695907" [bd317338-a109-4064-a445-4fe680b813b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 18:46:20.126536  137707 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-695907" [3909dc3b-b7a3-4772-8dc8-4993d6c2968f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 18:46:20.126543  137707 system_pods.go:61] "kube-proxy-qbrql" [cf5f419f-32ac-4dad-9f03-e878e315d9c2] Running
	I0729 18:46:20.126550  137707 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-695907" [c8de7eff-6c8e-472b-ae3d-ac68e3c67fd3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 18:46:20.126555  137707 system_pods.go:61] "storage-provisioner" [1740f978-1e5b-48bd-89e8-9396ae604d7c] Running
	I0729 18:46:20.126564  137707 system_pods.go:74] duration metric: took 17.976319ms to wait for pod list to return data ...
	I0729 18:46:20.126578  137707 node_conditions.go:102] verifying NodePressure condition ...
	I0729 18:46:20.131167  137707 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 18:46:20.131191  137707 node_conditions.go:123] node cpu capacity is 2
	I0729 18:46:20.131202  137707 node_conditions.go:105] duration metric: took 4.6191ms to run NodePressure ...
	I0729 18:46:20.131219  137707 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:46:20.457630  137707 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 18:46:20.471935  137707 ops.go:34] apiserver oom_adj: -16
	I0729 18:46:20.471961  137707 kubeadm.go:597] duration metric: took 11.204668802s to restartPrimaryControlPlane
	I0729 18:46:20.471973  137707 kubeadm.go:394] duration metric: took 11.502763251s to StartCluster
	I0729 18:46:20.471995  137707 settings.go:142] acquiring lock: {Name:mk5599d3fc2f664ed5eea99f33b4436f64ab8c83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:46:20.472075  137707 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19339-88081/kubeconfig
	I0729 18:46:20.473380  137707 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/kubeconfig: {Name:mk6702c0db404329102489655bdd2ff03ad6e919 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:46:20.473648  137707 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.224 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 18:46:20.473728  137707 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 18:46:20.473814  137707 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-695907"
	I0729 18:46:20.473843  137707 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-695907"
	I0729 18:46:20.473883  137707 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-695907"
	I0729 18:46:20.473926  137707 config.go:182] Loaded profile config "kubernetes-upgrade-695907": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 18:46:20.473851  137707 addons.go:234] Setting addon storage-provisioner=true in "kubernetes-upgrade-695907"
	W0729 18:46:20.473965  137707 addons.go:243] addon storage-provisioner should already be in state true
	I0729 18:46:20.474013  137707 host.go:66] Checking if "kubernetes-upgrade-695907" exists ...
	I0729 18:46:20.474312  137707 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:46:20.474347  137707 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:46:20.474541  137707 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:46:20.474577  137707 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:46:20.475326  137707 out.go:177] * Verifying Kubernetes components...
	I0729 18:46:20.476516  137707 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:46:20.491363  137707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35157
	I0729 18:46:20.491947  137707 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:46:20.492621  137707 main.go:141] libmachine: Using API Version  1
	I0729 18:46:20.492650  137707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:46:20.493247  137707 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:46:20.493678  137707 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:46:20.493699  137707 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:46:20.493768  137707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37253
	I0729 18:46:20.494252  137707 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:46:20.494810  137707 main.go:141] libmachine: Using API Version  1
	I0729 18:46:20.494837  137707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:46:20.495173  137707 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:46:20.495391  137707 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetState
	I0729 18:46:20.498737  137707 kapi.go:59] client config for kubernetes-upgrade-695907: &rest.Config{Host:"https://192.168.72.224:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19339-88081/.minikube/profiles/kubernetes-upgrade-695907/client.crt", KeyFile:"/home/jenkins/minikube-integration/19339-88081/.minikube/profiles/kubernetes-upgrade-695907/client.key", CAFile:"/home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil
), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 18:46:20.499085  137707 addons.go:234] Setting addon default-storageclass=true in "kubernetes-upgrade-695907"
	W0729 18:46:20.499100  137707 addons.go:243] addon default-storageclass should already be in state true
	I0729 18:46:20.499131  137707 host.go:66] Checking if "kubernetes-upgrade-695907" exists ...
	I0729 18:46:20.499498  137707 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:46:20.499533  137707 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:46:20.511273  137707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39441
	I0729 18:46:20.511869  137707 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:46:20.512412  137707 main.go:141] libmachine: Using API Version  1
	I0729 18:46:20.512439  137707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:46:20.512781  137707 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:46:20.513037  137707 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetState
	I0729 18:46:20.514890  137707 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .DriverName
	I0729 18:46:20.517051  137707 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:46:20.518541  137707 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 18:46:20.518561  137707 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 18:46:20.518587  137707 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetSSHHostname
	I0729 18:46:20.519647  137707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46775
	I0729 18:46:20.520048  137707 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:46:20.520594  137707 main.go:141] libmachine: Using API Version  1
	I0729 18:46:20.520618  137707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:46:20.521189  137707 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:46:20.521903  137707 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:46:20.521951  137707 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:46:20.522688  137707 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | domain kubernetes-upgrade-695907 has defined MAC address 52:54:00:8f:91:a2 in network mk-kubernetes-upgrade-695907
	I0729 18:46:20.523275  137707 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:91:a2", ip: ""} in network mk-kubernetes-upgrade-695907: {Iface:virbr2 ExpiryTime:2024-07-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:8f:91:a2 Iaid: IPaddr:192.168.72.224 Prefix:24 Hostname:kubernetes-upgrade-695907 Clientid:01:52:54:00:8f:91:a2}
	I0729 18:46:20.523304  137707 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | domain kubernetes-upgrade-695907 has defined IP address 192.168.72.224 and MAC address 52:54:00:8f:91:a2 in network mk-kubernetes-upgrade-695907
	I0729 18:46:20.523489  137707 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetSSHPort
	I0729 18:46:20.523696  137707 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetSSHKeyPath
	I0729 18:46:20.523887  137707 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetSSHUsername
	I0729 18:46:20.524053  137707 sshutil.go:53] new ssh client: &{IP:192.168.72.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/kubernetes-upgrade-695907/id_rsa Username:docker}
	I0729 18:46:20.543159  137707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44929
	I0729 18:46:20.543706  137707 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:46:20.544313  137707 main.go:141] libmachine: Using API Version  1
	I0729 18:46:20.544342  137707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:46:20.544749  137707 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:46:20.544967  137707 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetState
	I0729 18:46:20.547112  137707 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .DriverName
	I0729 18:46:20.547382  137707 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 18:46:20.547401  137707 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 18:46:20.547420  137707 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetSSHHostname
	I0729 18:46:20.550946  137707 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | domain kubernetes-upgrade-695907 has defined MAC address 52:54:00:8f:91:a2 in network mk-kubernetes-upgrade-695907
	I0729 18:46:20.551450  137707 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:91:a2", ip: ""} in network mk-kubernetes-upgrade-695907: {Iface:virbr2 ExpiryTime:2024-07-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:8f:91:a2 Iaid: IPaddr:192.168.72.224 Prefix:24 Hostname:kubernetes-upgrade-695907 Clientid:01:52:54:00:8f:91:a2}
	I0729 18:46:20.551480  137707 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | domain kubernetes-upgrade-695907 has defined IP address 192.168.72.224 and MAC address 52:54:00:8f:91:a2 in network mk-kubernetes-upgrade-695907
	I0729 18:46:20.551659  137707 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetSSHPort
	I0729 18:46:20.551885  137707 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetSSHKeyPath
	I0729 18:46:20.552075  137707 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .GetSSHUsername
	I0729 18:46:20.552219  137707 sshutil.go:53] new ssh client: &{IP:192.168.72.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/kubernetes-upgrade-695907/id_rsa Username:docker}
	I0729 18:46:20.726443  137193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:46:21.227191  137193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:46:21.331617  137193 kubeadm.go:1113] duration metric: took 11.32367143s to wait for elevateKubeSystemPrivileges
	I0729 18:46:21.331654  137193 kubeadm.go:394] duration metric: took 22.954434177s to StartCluster
	I0729 18:46:21.331677  137193 settings.go:142] acquiring lock: {Name:mk5599d3fc2f664ed5eea99f33b4436f64ab8c83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:46:21.331765  137193 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19339-88081/kubeconfig
	I0729 18:46:21.333046  137193 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/kubeconfig: {Name:mk6702c0db404329102489655bdd2ff03ad6e919 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:46:21.333301  137193 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0729 18:46:21.333308  137193 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.61.157 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 18:46:21.333398  137193 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 18:46:21.333473  137193 addons.go:69] Setting storage-provisioner=true in profile "kindnet-085245"
	I0729 18:46:21.333491  137193 config.go:182] Loaded profile config "kindnet-085245": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:46:21.333524  137193 addons.go:234] Setting addon storage-provisioner=true in "kindnet-085245"
	I0729 18:46:21.333538  137193 addons.go:69] Setting default-storageclass=true in profile "kindnet-085245"
	I0729 18:46:21.333560  137193 host.go:66] Checking if "kindnet-085245" exists ...
	I0729 18:46:21.333561  137193 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-085245"
	I0729 18:46:21.334003  137193 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:46:21.334005  137193 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:46:21.334024  137193 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:46:21.334031  137193 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:46:21.334957  137193 out.go:177] * Verifying Kubernetes components...
	I0729 18:46:21.336370  137193 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:46:21.350788  137193 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36193
	I0729 18:46:21.351312  137193 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:46:21.351923  137193 main.go:141] libmachine: Using API Version  1
	I0729 18:46:21.351946  137193 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:46:21.352301  137193 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:46:21.352853  137193 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:46:21.352910  137193 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:46:21.353835  137193 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36887
	I0729 18:46:21.354220  137193 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:46:21.354712  137193 main.go:141] libmachine: Using API Version  1
	I0729 18:46:21.354734  137193 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:46:21.355045  137193 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:46:21.355204  137193 main.go:141] libmachine: (kindnet-085245) Calling .GetState
	I0729 18:46:21.358735  137193 addons.go:234] Setting addon default-storageclass=true in "kindnet-085245"
	I0729 18:46:21.358783  137193 host.go:66] Checking if "kindnet-085245" exists ...
	I0729 18:46:21.359161  137193 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:46:21.359209  137193 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:46:21.375151  137193 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43861
	I0729 18:46:21.375837  137193 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:46:21.376473  137193 main.go:141] libmachine: Using API Version  1
	I0729 18:46:21.376491  137193 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:46:21.376846  137193 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:46:21.376916  137193 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35397
	I0729 18:46:21.377442  137193 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:46:21.377560  137193 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:46:21.377589  137193 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:46:21.381414  137193 main.go:141] libmachine: Using API Version  1
	I0729 18:46:21.381433  137193 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:46:21.381830  137193 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:46:21.382017  137193 main.go:141] libmachine: (kindnet-085245) Calling .GetState
	I0729 18:46:21.384127  137193 main.go:141] libmachine: (kindnet-085245) Calling .DriverName
	I0729 18:46:21.386313  137193 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:46:20.737342  137707 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:46:20.764956  137707 api_server.go:52] waiting for apiserver process to appear ...
	I0729 18:46:20.765055  137707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:46:20.786296  137707 api_server.go:72] duration metric: took 312.609845ms to wait for apiserver process to appear ...
	I0729 18:46:20.786326  137707 api_server.go:88] waiting for apiserver healthz status ...
	I0729 18:46:20.786348  137707 api_server.go:253] Checking apiserver healthz at https://192.168.72.224:8443/healthz ...
	I0729 18:46:20.794785  137707 api_server.go:279] https://192.168.72.224:8443/healthz returned 200:
	ok
	I0729 18:46:20.795794  137707 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 18:46:20.795874  137707 api_server.go:131] duration metric: took 9.534248ms to wait for apiserver health ...
	I0729 18:46:20.795899  137707 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 18:46:20.803770  137707 system_pods.go:59] 8 kube-system pods found
	I0729 18:46:20.803802  137707 system_pods.go:61] "coredns-5cfdc65f69-fcx4p" [1e3a75cd-5ef1-450b-93dc-c52dfd23bdbc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 18:46:20.803810  137707 system_pods.go:61] "coredns-5cfdc65f69-jh9jx" [39c91d68-14ab-484a-9019-21ba96949987] Running
	I0729 18:46:20.803821  137707 system_pods.go:61] "etcd-kubernetes-upgrade-695907" [26057da4-74d4-490c-b64a-50a566619c7e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 18:46:20.803832  137707 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-695907" [bd317338-a109-4064-a445-4fe680b813b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 18:46:20.803846  137707 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-695907" [3909dc3b-b7a3-4772-8dc8-4993d6c2968f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 18:46:20.803855  137707 system_pods.go:61] "kube-proxy-qbrql" [cf5f419f-32ac-4dad-9f03-e878e315d9c2] Running
	I0729 18:46:20.803866  137707 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-695907" [c8de7eff-6c8e-472b-ae3d-ac68e3c67fd3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 18:46:20.803874  137707 system_pods.go:61] "storage-provisioner" [1740f978-1e5b-48bd-89e8-9396ae604d7c] Running
	I0729 18:46:20.803882  137707 system_pods.go:74] duration metric: took 7.966667ms to wait for pod list to return data ...
	I0729 18:46:20.803897  137707 kubeadm.go:582] duration metric: took 330.216308ms to wait for: map[apiserver:true system_pods:true]
	I0729 18:46:20.803915  137707 node_conditions.go:102] verifying NodePressure condition ...
	I0729 18:46:20.822726  137707 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 18:46:20.822783  137707 node_conditions.go:123] node cpu capacity is 2
	I0729 18:46:20.822799  137707 node_conditions.go:105] duration metric: took 18.873859ms to run NodePressure ...
	I0729 18:46:20.822814  137707 start.go:241] waiting for startup goroutines ...
	I0729 18:46:20.973237  137707 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 18:46:20.975851  137707 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 18:46:21.862325  137707 main.go:141] libmachine: Making call to close driver server
	I0729 18:46:21.862366  137707 main.go:141] libmachine: Making call to close driver server
	I0729 18:46:21.862386  137707 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .Close
	I0729 18:46:21.862445  137707 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .Close
	I0729 18:46:21.864463  137707 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:46:21.864481  137707 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:46:21.864494  137707 main.go:141] libmachine: Making call to close driver server
	I0729 18:46:21.864489  137707 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | Closing plugin on server side
	I0729 18:46:21.864504  137707 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .Close
	I0729 18:46:21.864543  137707 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:46:21.864551  137707 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:46:21.864559  137707 main.go:141] libmachine: Making call to close driver server
	I0729 18:46:21.864566  137707 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .Close
	I0729 18:46:21.864655  137707 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | Closing plugin on server side
	I0729 18:46:21.864772  137707 main.go:141] libmachine: (kubernetes-upgrade-695907) DBG | Closing plugin on server side
	I0729 18:46:21.864764  137707 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:46:21.864836  137707 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:46:21.865078  137707 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:46:21.865092  137707 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:46:21.876294  137707 main.go:141] libmachine: Making call to close driver server
	I0729 18:46:21.876329  137707 main.go:141] libmachine: (kubernetes-upgrade-695907) Calling .Close
	I0729 18:46:21.876611  137707 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:46:21.876628  137707 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:46:21.878137  137707 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0729 18:46:21.879618  137707 addons.go:510] duration metric: took 1.405904753s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0729 18:46:21.879655  137707 start.go:246] waiting for cluster config update ...
	I0729 18:46:21.879669  137707 start.go:255] writing updated cluster config ...
	I0729 18:46:21.879917  137707 ssh_runner.go:195] Run: rm -f paused
	I0729 18:46:21.944597  137707 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0729 18:46:21.946583  137707 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-695907" cluster and "default" namespace by default
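
For reference, the addon step logged above copies storage-provisioner.yaml and storageclass.yaml onto the node and applies them with the bundled kubectl under KUBECONFIG=/var/lib/minikube/kubeconfig. A rough Go sketch of driving that kind of apply (run locally rather than over SSH, purely for illustration; the manifest paths are copied from the log) is:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyManifest shells out to kubectl much like the log above, except
// locally instead of over SSH; paths and binary name are illustrative.
func applyManifest(kubeconfig, manifest string) error {
	cmd := exec.Command("kubectl", "apply", "-f", manifest)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("apply %s: %v\n%s", manifest, err, out)
	}
	fmt.Printf("applied %s:\n%s", manifest, out)
	return nil
}

func main() {
	for _, m := range []string{
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		"/etc/kubernetes/addons/storageclass.yaml",
	} {
		if err := applyManifest("/var/lib/minikube/kubeconfig", m); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}
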
	
	
	==> CRI-O <==
	Jul 29 18:46:22 kubernetes-upgrade-695907 crio[2275]: time="2024-07-29 18:46:22.892174951Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=748d96f6-3672-4167-a469-12363782dc3a name=/runtime.v1.RuntimeService/Version
	Jul 29 18:46:22 kubernetes-upgrade-695907 crio[2275]: time="2024-07-29 18:46:22.894024444Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5e9e5f9c-c602-47e0-93c0-53e0f344f00d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:46:22 kubernetes-upgrade-695907 crio[2275]: time="2024-07-29 18:46:22.894634916Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722278782894608923,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5e9e5f9c-c602-47e0-93c0-53e0f344f00d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:46:22 kubernetes-upgrade-695907 crio[2275]: time="2024-07-29 18:46:22.895196787Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2bb72939-a427-4f39-992f-b2e3518b0fd4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:46:22 kubernetes-upgrade-695907 crio[2275]: time="2024-07-29 18:46:22.895258829Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2bb72939-a427-4f39-992f-b2e3518b0fd4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:46:22 kubernetes-upgrade-695907 crio[2275]: time="2024-07-29 18:46:22.896175544Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e1826ae9cf3fe569d0974717999bfeccb24412743abbe76350f9463eecfc0294,PodSandboxId:dc04729e9e885dea4493a0729c4cadfd2b93eccdbdf0bdb5ef9ef7792a7a44a7,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722278779286930878,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-fcx4p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e3a75cd-5ef1-450b-93dc-c52dfd23bdbc,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6a79fa63ad6a97857d133e78e36da08bad6830f3df18ce0fad61fdf4e01db2c,PodSandboxId:13ac989e39938428b100794d0d4bda647a4ee14e1097e2e48beb97de849a5390,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722278775488220309,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-695907,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: e6e531a801deae9292a4f878009b8b89,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:904cfee5e6ac46a5d1267ff663f1941dc64504d124b63e510087962be14c53e1,PodSandboxId:4a5c9936201ba2888a06eb81f87a06572ff9e6edd7018796dfc4600b42a83c74,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722278775498535813,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-695907,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 7fcf08d5435948c86eafc71c73112649,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5a0f71d373c4c343e018c7c5f7936b66651ffd491856fadca35492ef4339835,PodSandboxId:cd840d94d3ab3435c4a30dd8195d0b59141342802eab3c0d44eb5e1eb367967a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722278775430461544,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-695907,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: f263ce08b05791d184e5e434d3f88c0b,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c68c7bf1e697231f012ebd89674b4a7d041e5e16322d7262cfc7442575d538b6,PodSandboxId:2c7c491beadf21b7df756b8fb8ca94694700be12583d33cf5e75e1479c9d3e5b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722278775438060730,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-695907,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 7ad955147cd455f82c920bffd3eeeba1,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91d6b543d688d706eb72e4177c1d14b74ec1d4dd5d53e69301fe62db4ab69a8c,PodSandboxId:a7546916ef9ae4e34fb1eb0cfb2aa6932e90e7200293614a06296f78f49780ea,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722278768456132408,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qbrql,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf5f419
f-32ac-4dad-9f03-e878e315d9c2,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9c8dd772f540534cd8f8a31356bb6c4ef7578dfc879ac708de8278e063c67f2,PodSandboxId:5137d643ea9abfe65f9825a146e6eac559bafd73a7d7d6a6160a23c5e1914033,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722278769479130291,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-jh9jx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39c91d68-14ab-484a-9019-21ba96949987,}
,Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a209406878b05e8716f58fb6846887d5739ad5c72b320fd5638fb588b8d4fd4,PodSandboxId:b54bcc3f09d5bf13183e13f9f15c2efe561901a48b103d5071ba4247efaf15a1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722278768200192319,Labels:
map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1740f978-1e5b-48bd-89e8-9396ae604d7c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:024cd52bb30891e372259703e9491bd7d40ff69c57285ce45b27083eaa249ddc,PodSandboxId:dc04729e9e885dea4493a0729c4cadfd2b93eccdbdf0bdb5ef9ef7792a7a44a7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722278769191725574,Labels:map[string]string{io.kube
rnetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-fcx4p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e3a75cd-5ef1-450b-93dc-c52dfd23bdbc,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a419b5f089f45d260a88ea38deffe9f1ea10b7afbdc794b14cc838c067fb724,PodSandboxId:2c7c491beadf21b7df756b8fb8ca94694700be12583d33cf5e75e1479c9d3e5b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722278768227647296,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-695907,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ad955147cd455f82c920bffd3eeeba1,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4c5a4a675df15f692c1ed0845b1bab4833cc12a8936ff55a35fb021408038ea,PodSandboxId:cd840d94d3ab3435c4a30dd8195d0b59141342802eab3c0d44eb5e1eb367967a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]st
ring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1722278767946874722,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-695907,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f263ce08b05791d184e5e434d3f88c0b,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1140dcc277cb6f875d67b654c1bece822c33f23b5093841bf57f711c228b5e0,PodSandboxId:4a5c9936201ba2888a06eb81f87a06572ff9e6edd7018796dfc4600b42a83c74,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1722278768022860423,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-695907,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fcf08d5435948c86eafc71c73112649,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:909fd73182bfda0fb01dd5624aeebe8b58f9a6b4486c434405b03201468dab2c,PodSandboxId:13ac989e39938428b100794d0d4bda647a4ee14e1097e2e48beb97de849a5390,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1722278767767137556,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-695907,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6e531a801deae9292a4f878009b8b89,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84ad1e619be9c1a2a13315125eee3c0d41796503789b4374c7d882524a9d850e,PodSandboxId:df03e6c015fcc5b79002bd62ecb2301d762fdddb2f270390709abb8f37755bc4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722278750036841356,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1740f978-1e5b-48bd-89e8-9396ae604d7c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17c60b7eab5389b4a9b79767791f16e6020c85c28677bef23208730e1ab1778c,PodSandboxId:396cc636c81d71a080bf7080da0a797ee439750b340d28ba9a4d7ba11f8fcc0c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageR
ef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722278749963682022,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-jh9jx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39c91d68-14ab-484a-9019-21ba96949987,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e7928ec8dc264f672784e6e23cec1aabf2a31872d2a73653732ca8a868b4934,PodSandboxId:fb80e580405d0a759abfc724a08b6ff44eb9fec564e34befb286bf7e535bce72,Metadata:&ContainerMeta
data{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1722278749521226410,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qbrql,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf5f419f-32ac-4dad-9f03-e878e315d9c2,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2bb72939-a427-4f39-992f-b2e3518b0fd4 name=/runtime.v1.RuntimeService/ListContainers
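	The CRI-O entries above are the runtime's debug echo (via its otel-collector interceptor) of CRI gRPC calls arriving on its socket: /runtime.v1.RuntimeService/Version, /runtime.v1.ImageService/ImageFsInfo, and an unfiltered /runtime.v1.RuntimeService/ListContainers that returns every container attempt, running or exited. A hedged Go sketch of issuing the same queries with the published CRI client stubs (k8s.io/cri-api); the socket path is an assumption, and this is illustrative only, not how the test harness itself polls the runtime:

	// Hypothetical sketch: reproducing the Version, ImageFsInfo, and unfiltered
	// ListContainers calls visible in the CRI-O debug log above, using the
	// published CRI client stubs (k8s.io/cri-api). The socket path is an
	// assumption; the test harness may query the runtime differently.
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		rt := runtimeapi.NewRuntimeServiceClient(conn)
		img := runtimeapi.NewImageServiceClient(conn)

		// /runtime.v1.RuntimeService/Version
		ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s %s (CRI %s)\n", ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)

		// /runtime.v1.ImageService/ImageFsInfo
		fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
		if err != nil {
			log.Fatal(err)
		}
		for _, u := range fs.ImageFilesystems {
			fmt.Printf("%s: %d bytes, %d inodes\n",
				u.FsId.Mountpoint, u.UsedBytes.Value, u.InodesUsed.Value)
		}

		// /runtime.v1.RuntimeService/ListContainers with no filter ("No filters
		// were applied" in the log): every attempt, running or exited.
		cs, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			log.Fatal(err)
		}
		for _, c := range cs.Containers {
			fmt.Printf("%.12s %-25s attempt=%d state=%s\n",
				c.Id, c.Metadata.Name, c.Metadata.Attempt, c.State)
		}
	}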
	Jul 29 18:46:22 kubernetes-upgrade-695907 crio[2275]: time="2024-07-29 18:46:22.945858441Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cb0e8682-305e-4b9c-a5c6-b9d581bcba83 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:46:22 kubernetes-upgrade-695907 crio[2275]: time="2024-07-29 18:46:22.945956416Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cb0e8682-305e-4b9c-a5c6-b9d581bcba83 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:46:22 kubernetes-upgrade-695907 crio[2275]: time="2024-07-29 18:46:22.947711391Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ef72b758-4a70-47a2-b827-e750a7ef1a6c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:46:22 kubernetes-upgrade-695907 crio[2275]: time="2024-07-29 18:46:22.948082639Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722278782948059826,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ef72b758-4a70-47a2-b827-e750a7ef1a6c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:46:22 kubernetes-upgrade-695907 crio[2275]: time="2024-07-29 18:46:22.948788799Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=696d582e-b32f-4674-a82f-cf7957cd5d1f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:46:22 kubernetes-upgrade-695907 crio[2275]: time="2024-07-29 18:46:22.948874360Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=696d582e-b32f-4674-a82f-cf7957cd5d1f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:46:22 kubernetes-upgrade-695907 crio[2275]: time="2024-07-29 18:46:22.949247955Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e1826ae9cf3fe569d0974717999bfeccb24412743abbe76350f9463eecfc0294,PodSandboxId:dc04729e9e885dea4493a0729c4cadfd2b93eccdbdf0bdb5ef9ef7792a7a44a7,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722278779286930878,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-fcx4p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e3a75cd-5ef1-450b-93dc-c52dfd23bdbc,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6a79fa63ad6a97857d133e78e36da08bad6830f3df18ce0fad61fdf4e01db2c,PodSandboxId:13ac989e39938428b100794d0d4bda647a4ee14e1097e2e48beb97de849a5390,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722278775488220309,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-695907,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: e6e531a801deae9292a4f878009b8b89,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:904cfee5e6ac46a5d1267ff663f1941dc64504d124b63e510087962be14c53e1,PodSandboxId:4a5c9936201ba2888a06eb81f87a06572ff9e6edd7018796dfc4600b42a83c74,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722278775498535813,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-695907,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 7fcf08d5435948c86eafc71c73112649,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5a0f71d373c4c343e018c7c5f7936b66651ffd491856fadca35492ef4339835,PodSandboxId:cd840d94d3ab3435c4a30dd8195d0b59141342802eab3c0d44eb5e1eb367967a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722278775430461544,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-695907,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: f263ce08b05791d184e5e434d3f88c0b,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c68c7bf1e697231f012ebd89674b4a7d041e5e16322d7262cfc7442575d538b6,PodSandboxId:2c7c491beadf21b7df756b8fb8ca94694700be12583d33cf5e75e1479c9d3e5b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722278775438060730,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-695907,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 7ad955147cd455f82c920bffd3eeeba1,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91d6b543d688d706eb72e4177c1d14b74ec1d4dd5d53e69301fe62db4ab69a8c,PodSandboxId:a7546916ef9ae4e34fb1eb0cfb2aa6932e90e7200293614a06296f78f49780ea,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722278768456132408,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qbrql,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf5f419
f-32ac-4dad-9f03-e878e315d9c2,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9c8dd772f540534cd8f8a31356bb6c4ef7578dfc879ac708de8278e063c67f2,PodSandboxId:5137d643ea9abfe65f9825a146e6eac559bafd73a7d7d6a6160a23c5e1914033,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722278769479130291,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-jh9jx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39c91d68-14ab-484a-9019-21ba96949987,}
,Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a209406878b05e8716f58fb6846887d5739ad5c72b320fd5638fb588b8d4fd4,PodSandboxId:b54bcc3f09d5bf13183e13f9f15c2efe561901a48b103d5071ba4247efaf15a1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722278768200192319,Labels:
map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1740f978-1e5b-48bd-89e8-9396ae604d7c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:024cd52bb30891e372259703e9491bd7d40ff69c57285ce45b27083eaa249ddc,PodSandboxId:dc04729e9e885dea4493a0729c4cadfd2b93eccdbdf0bdb5ef9ef7792a7a44a7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722278769191725574,Labels:map[string]string{io.kube
rnetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-fcx4p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e3a75cd-5ef1-450b-93dc-c52dfd23bdbc,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a419b5f089f45d260a88ea38deffe9f1ea10b7afbdc794b14cc838c067fb724,PodSandboxId:2c7c491beadf21b7df756b8fb8ca94694700be12583d33cf5e75e1479c9d3e5b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722278768227647296,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-695907,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ad955147cd455f82c920bffd3eeeba1,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4c5a4a675df15f692c1ed0845b1bab4833cc12a8936ff55a35fb021408038ea,PodSandboxId:cd840d94d3ab3435c4a30dd8195d0b59141342802eab3c0d44eb5e1eb367967a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]st
ring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1722278767946874722,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-695907,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f263ce08b05791d184e5e434d3f88c0b,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1140dcc277cb6f875d67b654c1bece822c33f23b5093841bf57f711c228b5e0,PodSandboxId:4a5c9936201ba2888a06eb81f87a06572ff9e6edd7018796dfc4600b42a83c74,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1722278768022860423,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-695907,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fcf08d5435948c86eafc71c73112649,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:909fd73182bfda0fb01dd5624aeebe8b58f9a6b4486c434405b03201468dab2c,PodSandboxId:13ac989e39938428b100794d0d4bda647a4ee14e1097e2e48beb97de849a5390,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1722278767767137556,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-695907,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6e531a801deae9292a4f878009b8b89,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84ad1e619be9c1a2a13315125eee3c0d41796503789b4374c7d882524a9d850e,PodSandboxId:df03e6c015fcc5b79002bd62ecb2301d762fdddb2f270390709abb8f37755bc4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722278750036841356,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1740f978-1e5b-48bd-89e8-9396ae604d7c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17c60b7eab5389b4a9b79767791f16e6020c85c28677bef23208730e1ab1778c,PodSandboxId:396cc636c81d71a080bf7080da0a797ee439750b340d28ba9a4d7ba11f8fcc0c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageR
ef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722278749963682022,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-jh9jx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39c91d68-14ab-484a-9019-21ba96949987,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e7928ec8dc264f672784e6e23cec1aabf2a31872d2a73653732ca8a868b4934,PodSandboxId:fb80e580405d0a759abfc724a08b6ff44eb9fec564e34befb286bf7e535bce72,Metadata:&ContainerMeta
data{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1722278749521226410,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qbrql,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf5f419f-32ac-4dad-9f03-e878e315d9c2,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=696d582e-b32f-4674-a82f-cf7957cd5d1f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:46:22 kubernetes-upgrade-695907 crio[2275]: time="2024-07-29 18:46:22.984228095Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0d54b9ea-eeaf-4940-a1d3-2f1508be9e5b name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 29 18:46:22 kubernetes-upgrade-695907 crio[2275]: time="2024-07-29 18:46:22.984585239Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:dc04729e9e885dea4493a0729c4cadfd2b93eccdbdf0bdb5ef9ef7792a7a44a7,Metadata:&PodSandboxMetadata{Name:coredns-5cfdc65f69-fcx4p,Uid:1e3a75cd-5ef1-450b-93dc-c52dfd23bdbc,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722278767923311645,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5cfdc65f69-fcx4p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e3a75cd-5ef1-450b-93dc-c52dfd23bdbc,k8s-app: kube-dns,pod-template-hash: 5cfdc65f69,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T18:45:49.125962807Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5137d643ea9abfe65f9825a146e6eac559bafd73a7d7d6a6160a23c5e1914033,Metadata:&PodSandboxMetadata{Name:coredns-5cfdc65f69-jh9jx,Uid:39c91d68-14ab-484a-9019-21ba96949987,Namespac
e:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722278767699274987,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5cfdc65f69-jh9jx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39c91d68-14ab-484a-9019-21ba96949987,k8s-app: kube-dns,pod-template-hash: 5cfdc65f69,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T18:45:49.155282470Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a7546916ef9ae4e34fb1eb0cfb2aa6932e90e7200293614a06296f78f49780ea,Metadata:&PodSandboxMetadata{Name:kube-proxy-qbrql,Uid:cf5f419f-32ac-4dad-9f03-e878e315d9c2,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722278767514262766,Labels:map[string]string{controller-revision-hash: 6558c48888,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-qbrql,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf5f419f-32ac-4dad-9f03-e878e315d9c2,k8s-app: kube-proxy,pod-template-generation: 1,},Annot
ations:map[string]string{kubernetes.io/config.seen: 2024-07-29T18:45:49.052198932Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b54bcc3f09d5bf13183e13f9f15c2efe561901a48b103d5071ba4247efaf15a1,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:1740f978-1e5b-48bd-89e8-9396ae604d7c,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722278767502948116,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1740f978-1e5b-48bd-89e8-9396ae604d7c,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"conta
iners\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-29T18:45:49.079057740Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2c7c491beadf21b7df756b8fb8ca94694700be12583d33cf5e75e1479c9d3e5b,Metadata:&PodSandboxMetadata{Name:kube-apiserver-kubernetes-upgrade-695907,Uid:7ad955147cd455f82c920bffd3eeeba1,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722278767496969735,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-695907,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ad955147cd455f82c920bf
fd3eeeba1,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.224:8443,kubernetes.io/config.hash: 7ad955147cd455f82c920bffd3eeeba1,kubernetes.io/config.seen: 2024-07-29T18:45:36.087534823Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:13ac989e39938428b100794d0d4bda647a4ee14e1097e2e48beb97de849a5390,Metadata:&PodSandboxMetadata{Name:etcd-kubernetes-upgrade-695907,Uid:e6e531a801deae9292a4f878009b8b89,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722278767489304332,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-kubernetes-upgrade-695907,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6e531a801deae9292a4f878009b8b89,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.72.224:2379,kubernetes.io/config.hash: e6e531a801deae9292a4f878009b8b89,kubernetes.io/config.s
een: 2024-07-29T18:45:36.145045816Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4a5c9936201ba2888a06eb81f87a06572ff9e6edd7018796dfc4600b42a83c74,Metadata:&PodSandboxMetadata{Name:kube-scheduler-kubernetes-upgrade-695907,Uid:7fcf08d5435948c86eafc71c73112649,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722278767488929071,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-695907,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fcf08d5435948c86eafc71c73112649,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7fcf08d5435948c86eafc71c73112649,kubernetes.io/config.seen: 2024-07-29T18:45:36.087533725Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:cd840d94d3ab3435c4a30dd8195d0b59141342802eab3c0d44eb5e1eb367967a,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-kubernetes-upgrade-695907,Uid:f263ce08b05791d184e
5e434d3f88c0b,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722278767404928522,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-695907,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f263ce08b05791d184e5e434d3f88c0b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: f263ce08b05791d184e5e434d3f88c0b,kubernetes.io/config.seen: 2024-07-29T18:45:36.087530085Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=0d54b9ea-eeaf-4940-a1d3-2f1508be9e5b name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 29 18:46:22 kubernetes-upgrade-695907 crio[2275]: time="2024-07-29 18:46:22.985944512Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8191355c-15da-411c-b870-0d8cdb92032c name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:46:22 kubernetes-upgrade-695907 crio[2275]: time="2024-07-29 18:46:22.986024204Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8191355c-15da-411c-b870-0d8cdb92032c name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:46:22 kubernetes-upgrade-695907 crio[2275]: time="2024-07-29 18:46:22.986271249Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e1826ae9cf3fe569d0974717999bfeccb24412743abbe76350f9463eecfc0294,PodSandboxId:dc04729e9e885dea4493a0729c4cadfd2b93eccdbdf0bdb5ef9ef7792a7a44a7,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722278779286930878,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-fcx4p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e3a75cd-5ef1-450b-93dc-c52dfd23bdbc,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6a79fa63ad6a97857d133e78e36da08bad6830f3df18ce0fad61fdf4e01db2c,PodSandboxId:13ac989e39938428b100794d0d4bda647a4ee14e1097e2e48beb97de849a5390,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722278775488220309,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-695907,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: e6e531a801deae9292a4f878009b8b89,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:904cfee5e6ac46a5d1267ff663f1941dc64504d124b63e510087962be14c53e1,PodSandboxId:4a5c9936201ba2888a06eb81f87a06572ff9e6edd7018796dfc4600b42a83c74,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722278775498535813,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-695907,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 7fcf08d5435948c86eafc71c73112649,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5a0f71d373c4c343e018c7c5f7936b66651ffd491856fadca35492ef4339835,PodSandboxId:cd840d94d3ab3435c4a30dd8195d0b59141342802eab3c0d44eb5e1eb367967a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722278775430461544,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-695907,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: f263ce08b05791d184e5e434d3f88c0b,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c68c7bf1e697231f012ebd89674b4a7d041e5e16322d7262cfc7442575d538b6,PodSandboxId:2c7c491beadf21b7df756b8fb8ca94694700be12583d33cf5e75e1479c9d3e5b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722278775438060730,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-695907,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 7ad955147cd455f82c920bffd3eeeba1,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91d6b543d688d706eb72e4177c1d14b74ec1d4dd5d53e69301fe62db4ab69a8c,PodSandboxId:a7546916ef9ae4e34fb1eb0cfb2aa6932e90e7200293614a06296f78f49780ea,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722278768456132408,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qbrql,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf5f419
f-32ac-4dad-9f03-e878e315d9c2,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9c8dd772f540534cd8f8a31356bb6c4ef7578dfc879ac708de8278e063c67f2,PodSandboxId:5137d643ea9abfe65f9825a146e6eac559bafd73a7d7d6a6160a23c5e1914033,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722278769479130291,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-jh9jx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39c91d68-14ab-484a-9019-21ba96949987,}
,Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a209406878b05e8716f58fb6846887d5739ad5c72b320fd5638fb588b8d4fd4,PodSandboxId:b54bcc3f09d5bf13183e13f9f15c2efe561901a48b103d5071ba4247efaf15a1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722278768200192319,Labels:
map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1740f978-1e5b-48bd-89e8-9396ae604d7c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8191355c-15da-411c-b870-0d8cdb92032c name=/runtime.v1.RuntimeService/ListContainers
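	In the entries just above, the polling switches to filtered queries: ListPodSandbox restricted to State:SANDBOX_READY (only live sandboxes) and ListContainers restricted to CONTAINER_RUNNING, which drops the exited earlier attempts present in the unfiltered dumps. A hypothetical continuation of the sketch above showing those two filters (assumes the same imports and the RuntimeServiceClient rt from that sketch):

	// Hypothetical continuation of the earlier sketch: the filtered queries that
	// produce the SANDBOX_READY and CONTAINER_RUNNING responses above.
	func listLive(ctx context.Context, rt runtimeapi.RuntimeServiceClient) error {
		// /runtime.v1.RuntimeService/ListPodSandbox, ready sandboxes only.
		sandboxes, err := rt.ListPodSandbox(ctx, &runtimeapi.ListPodSandboxRequest{
			Filter: &runtimeapi.PodSandboxFilter{
				State: &runtimeapi.PodSandboxStateValue{State: runtimeapi.PodSandboxState_SANDBOX_READY},
			},
		})
		if err != nil {
			return err
		}
		// /runtime.v1.RuntimeService/ListContainers, running containers only.
		containers, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{
			Filter: &runtimeapi.ContainerFilter{
				State: &runtimeapi.ContainerStateValue{State: runtimeapi.ContainerState_CONTAINER_RUNNING},
			},
		})
		if err != nil {
			return err
		}
		fmt.Printf("%d ready sandboxes, %d running containers\n",
			len(sandboxes.Items), len(containers.Containers))
		return nil
	}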
	Jul 29 18:46:23 kubernetes-upgrade-695907 crio[2275]: time="2024-07-29 18:46:23.000590530Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=71b1e530-0c07-4c48-b42f-954e77faeb92 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:46:23 kubernetes-upgrade-695907 crio[2275]: time="2024-07-29 18:46:23.000685613Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=71b1e530-0c07-4c48-b42f-954e77faeb92 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:46:23 kubernetes-upgrade-695907 crio[2275]: time="2024-07-29 18:46:23.002599292Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=beaef0d9-4ca2-4f1c-9712-07ace04c52d3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:46:23 kubernetes-upgrade-695907 crio[2275]: time="2024-07-29 18:46:23.003099145Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722278783003071299,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=beaef0d9-4ca2-4f1c-9712-07ace04c52d3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:46:23 kubernetes-upgrade-695907 crio[2275]: time="2024-07-29 18:46:23.003855705Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=35ea111e-6fd6-45a1-b072-021a28b2ec68 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:46:23 kubernetes-upgrade-695907 crio[2275]: time="2024-07-29 18:46:23.003929216Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=35ea111e-6fd6-45a1-b072-021a28b2ec68 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:46:23 kubernetes-upgrade-695907 crio[2275]: time="2024-07-29 18:46:23.004236662Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e1826ae9cf3fe569d0974717999bfeccb24412743abbe76350f9463eecfc0294,PodSandboxId:dc04729e9e885dea4493a0729c4cadfd2b93eccdbdf0bdb5ef9ef7792a7a44a7,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722278779286930878,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-fcx4p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e3a75cd-5ef1-450b-93dc-c52dfd23bdbc,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6a79fa63ad6a97857d133e78e36da08bad6830f3df18ce0fad61fdf4e01db2c,PodSandboxId:13ac989e39938428b100794d0d4bda647a4ee14e1097e2e48beb97de849a5390,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722278775488220309,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-695907,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: e6e531a801deae9292a4f878009b8b89,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:904cfee5e6ac46a5d1267ff663f1941dc64504d124b63e510087962be14c53e1,PodSandboxId:4a5c9936201ba2888a06eb81f87a06572ff9e6edd7018796dfc4600b42a83c74,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722278775498535813,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-695907,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 7fcf08d5435948c86eafc71c73112649,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5a0f71d373c4c343e018c7c5f7936b66651ffd491856fadca35492ef4339835,PodSandboxId:cd840d94d3ab3435c4a30dd8195d0b59141342802eab3c0d44eb5e1eb367967a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722278775430461544,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-695907,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: f263ce08b05791d184e5e434d3f88c0b,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c68c7bf1e697231f012ebd89674b4a7d041e5e16322d7262cfc7442575d538b6,PodSandboxId:2c7c491beadf21b7df756b8fb8ca94694700be12583d33cf5e75e1479c9d3e5b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722278775438060730,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-695907,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 7ad955147cd455f82c920bffd3eeeba1,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91d6b543d688d706eb72e4177c1d14b74ec1d4dd5d53e69301fe62db4ab69a8c,PodSandboxId:a7546916ef9ae4e34fb1eb0cfb2aa6932e90e7200293614a06296f78f49780ea,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722278768456132408,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qbrql,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf5f419
f-32ac-4dad-9f03-e878e315d9c2,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9c8dd772f540534cd8f8a31356bb6c4ef7578dfc879ac708de8278e063c67f2,PodSandboxId:5137d643ea9abfe65f9825a146e6eac559bafd73a7d7d6a6160a23c5e1914033,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722278769479130291,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-jh9jx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39c91d68-14ab-484a-9019-21ba96949987,}
,Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a209406878b05e8716f58fb6846887d5739ad5c72b320fd5638fb588b8d4fd4,PodSandboxId:b54bcc3f09d5bf13183e13f9f15c2efe561901a48b103d5071ba4247efaf15a1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722278768200192319,Labels:
map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1740f978-1e5b-48bd-89e8-9396ae604d7c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:024cd52bb30891e372259703e9491bd7d40ff69c57285ce45b27083eaa249ddc,PodSandboxId:dc04729e9e885dea4493a0729c4cadfd2b93eccdbdf0bdb5ef9ef7792a7a44a7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722278769191725574,Labels:map[string]string{io.kube
rnetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-fcx4p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e3a75cd-5ef1-450b-93dc-c52dfd23bdbc,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a419b5f089f45d260a88ea38deffe9f1ea10b7afbdc794b14cc838c067fb724,PodSandboxId:2c7c491beadf21b7df756b8fb8ca94694700be12583d33cf5e75e1479c9d3e5b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722278768227647296,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-695907,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ad955147cd455f82c920bffd3eeeba1,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4c5a4a675df15f692c1ed0845b1bab4833cc12a8936ff55a35fb021408038ea,PodSandboxId:cd840d94d3ab3435c4a30dd8195d0b59141342802eab3c0d44eb5e1eb367967a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]st
ring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1722278767946874722,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-695907,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f263ce08b05791d184e5e434d3f88c0b,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1140dcc277cb6f875d67b654c1bece822c33f23b5093841bf57f711c228b5e0,PodSandboxId:4a5c9936201ba2888a06eb81f87a06572ff9e6edd7018796dfc4600b42a83c74,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1722278768022860423,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-695907,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fcf08d5435948c86eafc71c73112649,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:909fd73182bfda0fb01dd5624aeebe8b58f9a6b4486c434405b03201468dab2c,PodSandboxId:13ac989e39938428b100794d0d4bda647a4ee14e1097e2e48beb97de849a5390,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1722278767767137556,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-695907,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6e531a801deae9292a4f878009b8b89,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84ad1e619be9c1a2a13315125eee3c0d41796503789b4374c7d882524a9d850e,PodSandboxId:df03e6c015fcc5b79002bd62ecb2301d762fdddb2f270390709abb8f37755bc4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722278750036841356,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1740f978-1e5b-48bd-89e8-9396ae604d7c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17c60b7eab5389b4a9b79767791f16e6020c85c28677bef23208730e1ab1778c,PodSandboxId:396cc636c81d71a080bf7080da0a797ee439750b340d28ba9a4d7ba11f8fcc0c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageR
ef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722278749963682022,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-jh9jx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39c91d68-14ab-484a-9019-21ba96949987,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e7928ec8dc264f672784e6e23cec1aabf2a31872d2a73653732ca8a868b4934,PodSandboxId:fb80e580405d0a759abfc724a08b6ff44eb9fec564e34befb286bf7e535bce72,Metadata:&ContainerMeta
data{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1722278749521226410,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qbrql,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf5f419f-32ac-4dad-9f03-e878e315d9c2,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=35ea111e-6fd6-45a1-b072-021a28b2ec68 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e1826ae9cf3fe       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago       Running             coredns                   2                   dc04729e9e885       coredns-5cfdc65f69-fcx4p
	904cfee5e6ac4       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   7 seconds ago       Running             kube-scheduler            2                   4a5c9936201ba       kube-scheduler-kubernetes-upgrade-695907
	f6a79fa63ad6a       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   7 seconds ago       Running             etcd                      2                   13ac989e39938       etcd-kubernetes-upgrade-695907
	c68c7bf1e6972       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   7 seconds ago       Running             kube-apiserver            2                   2c7c491beadf2       kube-apiserver-kubernetes-upgrade-695907
	f5a0f71d373c4       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   7 seconds ago       Running             kube-controller-manager   2                   cd840d94d3ab3       kube-controller-manager-kubernetes-upgrade-695907
	b9c8dd772f540       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   13 seconds ago      Running             coredns                   1                   5137d643ea9ab       coredns-5cfdc65f69-jh9jx
	024cd52bb3089       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   13 seconds ago      Exited              coredns                   1                   dc04729e9e885       coredns-5cfdc65f69-fcx4p
	91d6b543d688d       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   14 seconds ago      Running             kube-proxy                1                   a7546916ef9ae       kube-proxy-qbrql
	8a419b5f089f4       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   14 seconds ago      Exited              kube-apiserver            1                   2c7c491beadf2       kube-apiserver-kubernetes-upgrade-695907
	3a209406878b0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 seconds ago      Running             storage-provisioner       1                   b54bcc3f09d5b       storage-provisioner
	a1140dcc277cb       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   15 seconds ago      Exited              kube-scheduler            1                   4a5c9936201ba       kube-scheduler-kubernetes-upgrade-695907
	a4c5a4a675df1       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   15 seconds ago      Exited              kube-controller-manager   1                   cd840d94d3ab3       kube-controller-manager-kubernetes-upgrade-695907
	909fd73182bfd       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   15 seconds ago      Exited              etcd                      1                   13ac989e39938       etcd-kubernetes-upgrade-695907
	84ad1e619be9c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   33 seconds ago      Exited              storage-provisioner       0                   df03e6c015fcc       storage-provisioner
	17c60b7eab538       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   33 seconds ago      Exited              coredns                   0                   396cc636c81d7       coredns-5cfdc65f69-jh9jx
	6e7928ec8dc26       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   33 seconds ago      Exited              kube-proxy                0                   fb80e580405d0       kube-proxy-qbrql
	
	
	==> coredns [024cd52bb30891e372259703e9491bd7d40ff69c57285ce45b27083eaa249ddc] <==
	
	
	==> coredns [17c60b7eab5389b4a9b79767791f16e6020c85c28677bef23208730e1ab1778c] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b9c8dd772f540534cd8f8a31356bb6c4ef7578dfc879ac708de8278e063c67f2] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: watch of *v1.Service ended with: very short watch: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Unexpected watch close - watch lasted less than a second and no items received
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: watch of *v1.Namespace ended with: very short watch: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Unexpected watch close - watch lasted less than a second and no items received
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=380": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=380": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=379": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=379": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> coredns [e1826ae9cf3fe569d0974717999bfeccb24412743abbe76350f9463eecfc0294] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-695907
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-695907
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 18:45:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-695907
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 18:46:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 18:46:18 +0000   Mon, 29 Jul 2024 18:45:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 18:46:18 +0000   Mon, 29 Jul 2024 18:45:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 18:46:18 +0000   Mon, 29 Jul 2024 18:45:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 18:46:18 +0000   Mon, 29 Jul 2024 18:45:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.224
	  Hostname:    kubernetes-upgrade-695907
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 51003f2d8c404a02bc846248260d5cdf
	  System UUID:                51003f2d-8c40-4a02-bc84-6248260d5cdf
	  Boot ID:                    4b54b74f-d5ac-4b5c-9a9f-3a052fb6924c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5cfdc65f69-fcx4p                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     34s
	  kube-system                 coredns-5cfdc65f69-jh9jx                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     34s
	  kube-system                 etcd-kubernetes-upgrade-695907                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         39s
	  kube-system                 kube-apiserver-kubernetes-upgrade-695907             250m (12%)    0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-695907    200m (10%)    0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-proxy-qbrql                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-scheduler-kubernetes-upgrade-695907             100m (5%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 33s                kube-proxy       
	  Normal  Starting                 47s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  46s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  46s (x8 over 47s)  kubelet          Node kubernetes-upgrade-695907 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    46s (x8 over 47s)  kubelet          Node kubernetes-upgrade-695907 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     46s (x7 over 47s)  kubelet          Node kubernetes-upgrade-695907 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           34s                node-controller  Node kubernetes-upgrade-695907 event: Registered Node kubernetes-upgrade-695907 in Controller
	  Normal  Starting                 9s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s (x8 over 8s)    kubelet          Node kubernetes-upgrade-695907 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s (x8 over 8s)    kubelet          Node kubernetes-upgrade-695907 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s (x7 over 8s)    kubelet          Node kubernetes-upgrade-695907 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           1s                 node-controller  Node kubernetes-upgrade-695907 event: Registered Node kubernetes-upgrade-695907 in Controller
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.200963] systemd-fstab-generator[570]: Ignoring "noauto" option for root device
	[  +0.063670] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059316] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.190291] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.165068] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.278497] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +4.184996] systemd-fstab-generator[734]: Ignoring "noauto" option for root device
	[  +2.041258] systemd-fstab-generator[855]: Ignoring "noauto" option for root device
	[  +0.064208] kauditd_printk_skb: 158 callbacks suppressed
	[  +8.788575] systemd-fstab-generator[1242]: Ignoring "noauto" option for root device
	[  +0.097836] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.000011] kauditd_printk_skb: 46 callbacks suppressed
	[Jul29 18:46] systemd-fstab-generator[2194]: Ignoring "noauto" option for root device
	[  +0.083255] kauditd_printk_skb: 51 callbacks suppressed
	[  +0.068511] systemd-fstab-generator[2206]: Ignoring "noauto" option for root device
	[  +0.191451] systemd-fstab-generator[2220]: Ignoring "noauto" option for root device
	[  +0.138470] systemd-fstab-generator[2232]: Ignoring "noauto" option for root device
	[  +0.303602] systemd-fstab-generator[2260]: Ignoring "noauto" option for root device
	[  +4.140729] systemd-fstab-generator[2415]: Ignoring "noauto" option for root device
	[  +0.577360] kauditd_printk_skb: 122 callbacks suppressed
	[  +7.222063] systemd-fstab-generator[3423]: Ignoring "noauto" option for root device
	[  +0.100799] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.773198] systemd-fstab-generator[3820]: Ignoring "noauto" option for root device
	[  +0.131629] kauditd_printk_skb: 52 callbacks suppressed
	
	
	==> etcd [909fd73182bfda0fb01dd5624aeebe8b58f9a6b4486c434405b03201468dab2c] <==
	{"level":"info","ts":"2024-07-29T18:46:08.835712Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a180d377c721c7fb became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-29T18:46:08.835728Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a180d377c721c7fb received MsgPreVoteResp from a180d377c721c7fb at term 2"}
	{"level":"info","ts":"2024-07-29T18:46:08.83574Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a180d377c721c7fb became candidate at term 3"}
	{"level":"info","ts":"2024-07-29T18:46:08.835746Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a180d377c721c7fb received MsgVoteResp from a180d377c721c7fb at term 3"}
	{"level":"info","ts":"2024-07-29T18:46:08.835755Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a180d377c721c7fb became leader at term 3"}
	{"level":"info","ts":"2024-07-29T18:46:08.835762Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a180d377c721c7fb elected leader a180d377c721c7fb at term 3"}
	{"level":"info","ts":"2024-07-29T18:46:08.842191Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"a180d377c721c7fb","local-member-attributes":"{Name:kubernetes-upgrade-695907 ClientURLs:[https://192.168.72.224:2379]}","request-path":"/0/members/a180d377c721c7fb/attributes","cluster-id":"4666edb41ee6a2c4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T18:46:08.842524Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T18:46:08.844089Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T18:46:08.845786Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T18:46:08.84587Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T18:46:08.848716Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-29T18:46:08.855676Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T18:46:08.871583Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-29T18:46:08.874686Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.224:2379"}
	{"level":"info","ts":"2024-07-29T18:46:13.175308Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-29T18:46:13.175395Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"kubernetes-upgrade-695907","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.72.224:2380"],"advertise-client-urls":["https://192.168.72.224:2379"]}
	{"level":"warn","ts":"2024-07-29T18:46:13.175468Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.72.224:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T18:46:13.175495Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.72.224:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T18:46:13.175601Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T18:46:13.175611Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-29T18:46:13.17771Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"a180d377c721c7fb","current-leader-member-id":"a180d377c721c7fb"}
	{"level":"info","ts":"2024-07-29T18:46:13.18177Z","caller":"embed/etcd.go:580","msg":"stopping serving peer traffic","address":"192.168.72.224:2380"}
	{"level":"info","ts":"2024-07-29T18:46:13.181962Z","caller":"embed/etcd.go:585","msg":"stopped serving peer traffic","address":"192.168.72.224:2380"}
	{"level":"info","ts":"2024-07-29T18:46:13.181991Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"kubernetes-upgrade-695907","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.72.224:2380"],"advertise-client-urls":["https://192.168.72.224:2379"]}
	
	
	==> etcd [f6a79fa63ad6a97857d133e78e36da08bad6830f3df18ce0fad61fdf4e01db2c] <==
	{"level":"info","ts":"2024-07-29T18:46:15.941189Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a180d377c721c7fb switched to configuration voters=(11637533948520810491)"}
	{"level":"info","ts":"2024-07-29T18:46:15.947884Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"4666edb41ee6a2c4","local-member-id":"a180d377c721c7fb","added-peer-id":"a180d377c721c7fb","added-peer-peer-urls":["https://192.168.72.224:2380"]}
	{"level":"info","ts":"2024-07-29T18:46:15.948019Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"4666edb41ee6a2c4","local-member-id":"a180d377c721c7fb","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T18:46:15.948078Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T18:46:15.966278Z","caller":"embed/etcd.go:727","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T18:46:15.972185Z","caller":"embed/etcd.go:598","msg":"serving peer traffic","address":"192.168.72.224:2380"}
	{"level":"info","ts":"2024-07-29T18:46:15.97241Z","caller":"embed/etcd.go:570","msg":"cmux::serve","address":"192.168.72.224:2380"}
	{"level":"info","ts":"2024-07-29T18:46:15.975646Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"a180d377c721c7fb","initial-advertise-peer-urls":["https://192.168.72.224:2380"],"listen-peer-urls":["https://192.168.72.224:2380"],"advertise-client-urls":["https://192.168.72.224:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.224:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T18:46:15.978436Z","caller":"embed/etcd.go:858","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T18:46:16.893429Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a180d377c721c7fb is starting a new election at term 3"}
	{"level":"info","ts":"2024-07-29T18:46:16.893547Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a180d377c721c7fb became pre-candidate at term 3"}
	{"level":"info","ts":"2024-07-29T18:46:16.893583Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a180d377c721c7fb received MsgPreVoteResp from a180d377c721c7fb at term 3"}
	{"level":"info","ts":"2024-07-29T18:46:16.893613Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a180d377c721c7fb became candidate at term 4"}
	{"level":"info","ts":"2024-07-29T18:46:16.893637Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a180d377c721c7fb received MsgVoteResp from a180d377c721c7fb at term 4"}
	{"level":"info","ts":"2024-07-29T18:46:16.893664Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a180d377c721c7fb became leader at term 4"}
	{"level":"info","ts":"2024-07-29T18:46:16.893689Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a180d377c721c7fb elected leader a180d377c721c7fb at term 4"}
	{"level":"info","ts":"2024-07-29T18:46:16.906582Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"a180d377c721c7fb","local-member-attributes":"{Name:kubernetes-upgrade-695907 ClientURLs:[https://192.168.72.224:2379]}","request-path":"/0/members/a180d377c721c7fb/attributes","cluster-id":"4666edb41ee6a2c4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T18:46:16.906731Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T18:46:16.90745Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T18:46:16.908209Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-29T18:46:16.912677Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T18:46:16.908311Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-29T18:46:16.911412Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T18:46:16.91752Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T18:46:16.918412Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.224:2379"}
	
	
	==> kernel <==
	 18:46:23 up 1 min,  0 users,  load average: 1.60, 0.43, 0.15
	Linux kubernetes-upgrade-695907 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [8a419b5f089f45d260a88ea38deffe9f1ea10b7afbdc794b14cc838c067fb724] <==
	E0729 18:46:12.654107       1 system_namespaces_controller.go:69] "Unhandled Error" err="timed out waiting for caches to sync" logger="UnhandledError"
	I0729 18:46:12.654135       1 customresource_discovery_controller.go:328] Shutting down DiscoveryController
	I0729 18:46:12.654253       1 controller.go:124] Shutting down legacy_token_tracking_controller
	E0729 18:46:12.654961       1 controller.go:195] "Failed to update lease" err="Put \"https://localhost:8443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/apiserver-kltqbo6uvkrlk2nyithylxdq74\": context canceled"
	E0729 18:46:12.655102       1 controller.go:195] "Failed to update lease" err="context canceled"
	E0729 18:46:12.655170       1 controller.go:195] "Failed to update lease" err="context canceled"
	E0729 18:46:12.655235       1 controller.go:195] "Failed to update lease" err="context canceled"
	E0729 18:46:12.655299       1 controller.go:195] "Failed to update lease" err="context canceled"
	E0729 18:46:12.655324       1 controller.go:123] "Will retry updating lease" err="failed 5 attempts to update lease" interval="10s"
	I0729 18:46:12.655450       1 system_namespaces_controller.go:70] Shutting down system namespaces controller
	I0729 18:46:12.655869       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0729 18:46:12.655953       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0729 18:46:12.656009       1 dynamic_serving_content.go:149] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	F0729 18:46:12.656065       1 hooks.go:210] PostStartHook "priority-and-fairness-config-producer" failed: APF bootstrap ensurer timed out waiting for cache sync
	I0729 18:46:12.747230       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0729 18:46:12.747605       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0729 18:46:12.747687       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0729 18:46:12.747767       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0729 18:46:12.747798       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0729 18:46:12.747875       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0729 18:46:12.748491       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 18:46:12.751425       1 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0729 18:46:12.752032       1 gc_controller.go:85] Shutting down apiserver lease garbage collector
	I0729 18:46:12.752134       1 available_controller.go:432] Shutting down AvailableConditionController
	I0729 18:46:12.752278       1 cluster_authentication_trust_controller.go:451] Shutting down cluster_authentication_trust_controller controller
	
	
	==> kube-apiserver [c68c7bf1e697231f012ebd89674b4a7d041e5e16322d7262cfc7442575d538b6] <==
	I0729 18:46:18.522716       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 18:46:18.527830       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0729 18:46:18.527865       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0729 18:46:18.527947       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 18:46:18.528296       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0729 18:46:18.528537       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0729 18:46:18.555699       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0729 18:46:18.555817       1 aggregator.go:171] initial CRD sync complete...
	I0729 18:46:18.555852       1 autoregister_controller.go:144] Starting autoregister controller
	I0729 18:46:18.555881       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0729 18:46:18.555909       1 cache.go:39] Caches are synced for autoregister controller
	I0729 18:46:18.577459       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 18:46:18.577492       1 policy_source.go:224] refreshing policies
	I0729 18:46:18.577529       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0729 18:46:18.578857       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 18:46:18.593750       1 shared_informer.go:320] Caches are synced for configmaps
	I0729 18:46:18.648228       1 controller.go:615] quota admission added evaluator for: endpoints
	I0729 18:46:19.414975       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0729 18:46:19.708977       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.72.224]
	I0729 18:46:19.718912       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0729 18:46:20.266624       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0729 18:46:20.281869       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0729 18:46:20.331840       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 18:46:20.428737       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0729 18:46:20.437621       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [a4c5a4a675df15f692c1ed0845b1bab4833cc12a8936ff55a35fb021408038ea] <==
	I0729 18:46:10.078648       1 serving.go:386] Generated self-signed cert in-memory
	I0729 18:46:11.104941       1 controllermanager.go:188] "Starting" version="v1.31.0-beta.0"
	I0729 18:46:11.104983       1 controllermanager.go:190] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 18:46:11.106609       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0729 18:46:11.106746       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0729 18:46:11.106813       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0729 18:46:11.106936       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	
	==> kube-controller-manager [f5a0f71d373c4c343e018c7c5f7936b66651ffd491856fadca35492ef4339835] <==
	I0729 18:46:22.332483       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0729 18:46:22.354613       1 shared_informer.go:320] Caches are synced for daemon sets
	I0729 18:46:22.363709       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="31.099108ms"
	I0729 18:46:22.365680       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="60.043µs"
	I0729 18:46:22.384443       1 shared_informer.go:320] Caches are synced for taint
	I0729 18:46:22.384707       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0729 18:46:22.384869       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="kubernetes-upgrade-695907"
	I0729 18:46:22.384952       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0729 18:46:22.577322       1 shared_informer.go:320] Caches are synced for PV protection
	I0729 18:46:22.629939       1 shared_informer.go:320] Caches are synced for crt configmap
	I0729 18:46:22.701925       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0729 18:46:22.915820       1 shared_informer.go:320] Caches are synced for HPA
	I0729 18:46:22.978154       1 shared_informer.go:320] Caches are synced for persistent volume
	I0729 18:46:22.979714       1 shared_informer.go:320] Caches are synced for ephemeral
	I0729 18:46:23.009445       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 18:46:23.024891       1 shared_informer.go:320] Caches are synced for attach detach
	I0729 18:46:23.028116       1 shared_informer.go:320] Caches are synced for stateful set
	I0729 18:46:23.030437       1 shared_informer.go:320] Caches are synced for expand
	I0729 18:46:23.036175       1 shared_informer.go:320] Caches are synced for PVC protection
	I0729 18:46:23.046962       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 18:46:23.047041       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0729 18:46:23.059131       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 18:46:23.059257       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 18:46:23.087788       1 shared_informer.go:320] Caches are synced for service account
	I0729 18:46:23.092312       1 shared_informer.go:320] Caches are synced for namespace
	
	
	==> kube-proxy [6e7928ec8dc264f672784e6e23cec1aabf2a31872d2a73653732ca8a868b4934] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0729 18:45:49.973848       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0729 18:45:49.993139       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.72.224"]
	E0729 18:45:49.993418       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0729 18:45:50.124939       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0729 18:45:50.166919       1 server.go:246] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 18:45:50.172111       1 server_linux.go:170] "Using iptables Proxier"
	I0729 18:45:50.210832       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0729 18:45:50.212452       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0729 18:45:50.212717       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 18:45:50.216483       1 config.go:197] "Starting service config controller"
	I0729 18:45:50.216540       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 18:45:50.217773       1 config.go:104] "Starting endpoint slice config controller"
	I0729 18:45:50.217827       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 18:45:50.227159       1 config.go:326] "Starting node config controller"
	I0729 18:45:50.228426       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 18:45:50.325411       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 18:45:50.329404       1 shared_informer.go:320] Caches are synced for service config
	I0729 18:45:50.333199       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [91d6b543d688d706eb72e4177c1d14b74ec1d4dd5d53e69301fe62db4ab69a8c] <==
	I0729 18:46:12.794865       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 18:46:12.797554       1 config.go:197] "Starting service config controller"
	I0729 18:46:12.797616       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 18:46:12.797656       1 config.go:104] "Starting endpoint slice config controller"
	I0729 18:46:12.797726       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	W0729 18:46:12.798301       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.72.224:8443: connect: connection refused
	E0729 18:46:12.798490       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.72.224:8443: connect: connection refused" logger="UnhandledError"
	W0729 18:46:12.798598       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.72.224:8443: connect: connection refused
	E0729 18:46:12.798667       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.72.224:8443: connect: connection refused" logger="UnhandledError"
	I0729 18:46:12.799067       1 config.go:326] "Starting node config controller"
	I0729 18:46:12.799118       1 shared_informer.go:313] Waiting for caches to sync for node config
	W0729 18:46:12.799217       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)kubernetes-upgrade-695907&limit=500&resourceVersion=0": dial tcp 192.168.72.224:8443: connect: connection refused
	E0729 18:46:12.799271       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)kubernetes-upgrade-695907&limit=500&resourceVersion=0\": dial tcp 192.168.72.224:8443: connect: connection refused" logger="UnhandledError"
	E0729 18:46:12.799720       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.72.224:8443: connect: connection refused"
	W0729 18:46:13.762515       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)kubernetes-upgrade-695907&limit=500&resourceVersion=0": dial tcp 192.168.72.224:8443: connect: connection refused
	E0729 18:46:13.762576       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)kubernetes-upgrade-695907&limit=500&resourceVersion=0\": dial tcp 192.168.72.224:8443: connect: connection refused" logger="UnhandledError"
	W0729 18:46:14.209215       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.72.224:8443: connect: connection refused
	E0729 18:46:14.209257       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.72.224:8443: connect: connection refused" logger="UnhandledError"
	W0729 18:46:14.329589       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.72.224:8443: connect: connection refused
	E0729 18:46:14.329630       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.72.224:8443: connect: connection refused" logger="UnhandledError"
	W0729 18:46:15.648584       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)kubernetes-upgrade-695907&limit=500&resourceVersion=0": dial tcp 192.168.72.224:8443: connect: connection refused
	E0729 18:46:15.649857       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)kubernetes-upgrade-695907&limit=500&resourceVersion=0\": dial tcp 192.168.72.224:8443: connect: connection refused" logger="UnhandledError"
	I0729 18:46:18.598311       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 18:46:18.598417       1 shared_informer.go:320] Caches are synced for service config
	I0729 18:46:19.800295       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [904cfee5e6ac46a5d1267ff663f1941dc64504d124b63e510087962be14c53e1] <==
	I0729 18:46:16.728975       1 serving.go:386] Generated self-signed cert in-memory
	W0729 18:46:18.502926       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 18:46:18.503096       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 18:46:18.503129       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 18:46:18.503207       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 18:46:18.550744       1 server.go:164] "Starting Kubernetes Scheduler" version="v1.31.0-beta.0"
	I0729 18:46:18.551443       1 server.go:166] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 18:46:18.556093       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0729 18:46:18.556253       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 18:46:18.556295       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 18:46:18.556328       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0729 18:46:18.656550       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [a1140dcc277cb6f875d67b654c1bece822c33f23b5093841bf57f711c228b5e0] <==
	I0729 18:46:11.047427       1 serving.go:386] Generated self-signed cert in-memory
	W0729 18:46:12.557485       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 18:46:12.557523       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 18:46:12.557534       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 18:46:12.557541       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 18:46:12.627713       1 server.go:164] "Starting Kubernetes Scheduler" version="v1.31.0-beta.0"
	I0729 18:46:12.627763       1 server.go:166] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 18:46:12.635141       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0729 18:46:12.657548       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0729 18:46:12.665408       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 18:46:12.665475       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0729 18:46:12.896926       1 server.go:237] "waiting for handlers to sync" err="context canceled"
	I0729 18:46:12.897479       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0729 18:46:12.897577       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0729 18:46:12.899236       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 29 18:46:15 kubernetes-upgrade-695907 kubelet[3430]: I0729 18:46:15.202068    3430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f263ce08b05791d184e5e434d3f88c0b-flexvolume-dir\") pod \"kube-controller-manager-kubernetes-upgrade-695907\" (UID: \"f263ce08b05791d184e5e434d3f88c0b\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-695907"
	Jul 29 18:46:15 kubernetes-upgrade-695907 kubelet[3430]: I0729 18:46:15.202098    3430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f263ce08b05791d184e5e434d3f88c0b-kubeconfig\") pod \"kube-controller-manager-kubernetes-upgrade-695907\" (UID: \"f263ce08b05791d184e5e434d3f88c0b\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-695907"
	Jul 29 18:46:15 kubernetes-upgrade-695907 kubelet[3430]: I0729 18:46:15.202129    3430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/e6e531a801deae9292a4f878009b8b89-etcd-certs\") pod \"etcd-kubernetes-upgrade-695907\" (UID: \"e6e531a801deae9292a4f878009b8b89\") " pod="kube-system/etcd-kubernetes-upgrade-695907"
	Jul 29 18:46:15 kubernetes-upgrade-695907 kubelet[3430]: I0729 18:46:15.202185    3430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/e6e531a801deae9292a4f878009b8b89-etcd-data\") pod \"etcd-kubernetes-upgrade-695907\" (UID: \"e6e531a801deae9292a4f878009b8b89\") " pod="kube-system/etcd-kubernetes-upgrade-695907"
	Jul 29 18:46:15 kubernetes-upgrade-695907 kubelet[3430]: I0729 18:46:15.292920    3430 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-695907"
	Jul 29 18:46:15 kubernetes-upgrade-695907 kubelet[3430]: E0729 18:46:15.293964    3430 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.72.224:8443: connect: connection refused" node="kubernetes-upgrade-695907"
	Jul 29 18:46:15 kubernetes-upgrade-695907 kubelet[3430]: I0729 18:46:15.404888    3430 scope.go:117] "RemoveContainer" containerID="a4c5a4a675df15f692c1ed0845b1bab4833cc12a8936ff55a35fb021408038ea"
	Jul 29 18:46:15 kubernetes-upgrade-695907 kubelet[3430]: I0729 18:46:15.410123    3430 scope.go:117] "RemoveContainer" containerID="8a419b5f089f45d260a88ea38deffe9f1ea10b7afbdc794b14cc838c067fb724"
	Jul 29 18:46:15 kubernetes-upgrade-695907 kubelet[3430]: I0729 18:46:15.427045    3430 scope.go:117] "RemoveContainer" containerID="909fd73182bfda0fb01dd5624aeebe8b58f9a6b4486c434405b03201468dab2c"
	Jul 29 18:46:15 kubernetes-upgrade-695907 kubelet[3430]: I0729 18:46:15.428917    3430 scope.go:117] "RemoveContainer" containerID="a1140dcc277cb6f875d67b654c1bece822c33f23b5093841bf57f711c228b5e0"
	Jul 29 18:46:15 kubernetes-upgrade-695907 kubelet[3430]: E0729 18:46:15.584013    3430 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-695907?timeout=10s\": dial tcp 192.168.72.224:8443: connect: connection refused" interval="800ms"
	Jul 29 18:46:15 kubernetes-upgrade-695907 kubelet[3430]: I0729 18:46:15.696456    3430 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-695907"
	Jul 29 18:46:15 kubernetes-upgrade-695907 kubelet[3430]: E0729 18:46:15.697844    3430 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.72.224:8443: connect: connection refused" node="kubernetes-upgrade-695907"
	Jul 29 18:46:16 kubernetes-upgrade-695907 kubelet[3430]: I0729 18:46:16.500398    3430 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-695907"
	Jul 29 18:46:18 kubernetes-upgrade-695907 kubelet[3430]: I0729 18:46:18.643296    3430 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-695907"
	Jul 29 18:46:18 kubernetes-upgrade-695907 kubelet[3430]: I0729 18:46:18.643467    3430 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-695907"
	Jul 29 18:46:18 kubernetes-upgrade-695907 kubelet[3430]: I0729 18:46:18.643496    3430 kuberuntime_manager.go:1524] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 29 18:46:18 kubernetes-upgrade-695907 kubelet[3430]: I0729 18:46:18.644566    3430 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 29 18:46:18 kubernetes-upgrade-695907 kubelet[3430]: I0729 18:46:18.948066    3430 apiserver.go:52] "Watching apiserver"
	Jul 29 18:46:18 kubernetes-upgrade-695907 kubelet[3430]: I0729 18:46:18.975833    3430 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Jul 29 18:46:19 kubernetes-upgrade-695907 kubelet[3430]: I0729 18:46:19.075331    3430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cf5f419f-32ac-4dad-9f03-e878e315d9c2-lib-modules\") pod \"kube-proxy-qbrql\" (UID: \"cf5f419f-32ac-4dad-9f03-e878e315d9c2\") " pod="kube-system/kube-proxy-qbrql"
	Jul 29 18:46:19 kubernetes-upgrade-695907 kubelet[3430]: I0729 18:46:19.075633    3430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/1740f978-1e5b-48bd-89e8-9396ae604d7c-tmp\") pod \"storage-provisioner\" (UID: \"1740f978-1e5b-48bd-89e8-9396ae604d7c\") " pod="kube-system/storage-provisioner"
	Jul 29 18:46:19 kubernetes-upgrade-695907 kubelet[3430]: I0729 18:46:19.075686    3430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cf5f419f-32ac-4dad-9f03-e878e315d9c2-xtables-lock\") pod \"kube-proxy-qbrql\" (UID: \"cf5f419f-32ac-4dad-9f03-e878e315d9c2\") " pod="kube-system/kube-proxy-qbrql"
	Jul 29 18:46:19 kubernetes-upgrade-695907 kubelet[3430]: E0729 18:46:19.203574    3430 kubelet.go:1900] "Failed creating a mirror pod for" err="pods \"kube-apiserver-kubernetes-upgrade-695907\" already exists" pod="kube-system/kube-apiserver-kubernetes-upgrade-695907"
	Jul 29 18:46:19 kubernetes-upgrade-695907 kubelet[3430]: I0729 18:46:19.257712    3430 scope.go:117] "RemoveContainer" containerID="024cd52bb30891e372259703e9491bd7d40ff69c57285ce45b27083eaa249ddc"
	
	
	==> storage-provisioner [3a209406878b05e8716f58fb6846887d5739ad5c72b320fd5638fb588b8d4fd4] <==
	I0729 18:46:09.880645       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 18:46:12.639972       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 18:46:12.664600       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	E0729 18:46:13.754518       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	I0729 18:46:18.674259       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 18:46:18.675316       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-695907_f6034b34-db61-4aba-97a8-2674a7ee37b0!
	I0729 18:46:18.675193       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"20ff7d72-389a-4099-8692-29b936be7054", APIVersion:"v1", ResourceVersion:"392", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-695907_f6034b34-db61-4aba-97a8-2674a7ee37b0 became leader
	I0729 18:46:18.777318       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-695907_f6034b34-db61-4aba-97a8-2674a7ee37b0!
	
	
	==> storage-provisioner [84ad1e619be9c1a2a13315125eee3c0d41796503789b4374c7d882524a9d850e] <==
	I0729 18:45:50.453128       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-695907 -n kubernetes-upgrade-695907
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-695907 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-695907" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-695907
--- FAIL: TestKubernetesUpgrade (426.42s)

x
+
TestPause/serial/SecondStartNoReconfiguration (55.21s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-134415 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-134415 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (51.490947514s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-134415] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19339
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19339-88081/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19339-88081/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-134415" primary control-plane node in "pause-134415" cluster
	* Updating the running kvm2 "pause-134415" VM ...
	* Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-134415" cluster and "default" namespace by default

-- /stdout --
** stderr ** 
	I0729 18:43:49.259640  135833 out.go:291] Setting OutFile to fd 1 ...
	I0729 18:43:49.259902  135833 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:43:49.259914  135833 out.go:304] Setting ErrFile to fd 2...
	I0729 18:43:49.259920  135833 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:43:49.260113  135833 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19339-88081/.minikube/bin
	I0729 18:43:49.260659  135833 out.go:298] Setting JSON to false
	I0729 18:43:49.261610  135833 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":12349,"bootTime":1722266280,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 18:43:49.261676  135833 start.go:139] virtualization: kvm guest
	I0729 18:43:49.264286  135833 out.go:177] * [pause-134415] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 18:43:49.265702  135833 notify.go:220] Checking for updates...
	I0729 18:43:49.265706  135833 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 18:43:49.267240  135833 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 18:43:49.268487  135833 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19339-88081/kubeconfig
	I0729 18:43:49.269643  135833 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19339-88081/.minikube
	I0729 18:43:49.270751  135833 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 18:43:49.271843  135833 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 18:43:49.273309  135833 config.go:182] Loaded profile config "pause-134415": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:43:49.273675  135833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:43:49.273715  135833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:43:49.288233  135833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34573
	I0729 18:43:49.288595  135833 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:43:49.289215  135833 main.go:141] libmachine: Using API Version  1
	I0729 18:43:49.289246  135833 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:43:49.289594  135833 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:43:49.289778  135833 main.go:141] libmachine: (pause-134415) Calling .DriverName
	I0729 18:43:49.290050  135833 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 18:43:49.290316  135833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:43:49.290347  135833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:43:49.304263  135833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46117
	I0729 18:43:49.304601  135833 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:43:49.305076  135833 main.go:141] libmachine: Using API Version  1
	I0729 18:43:49.305096  135833 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:43:49.305388  135833 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:43:49.305593  135833 main.go:141] libmachine: (pause-134415) Calling .DriverName
	I0729 18:43:49.341043  135833 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 18:43:49.342479  135833 start.go:297] selected driver: kvm2
	I0729 18:43:49.342496  135833 start.go:901] validating driver "kvm2" against &{Name:pause-134415 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.30.3 ClusterName:pause-134415 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.77 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-devi
ce-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:43:49.342664  135833 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 18:43:49.343000  135833 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 18:43:49.343071  135833 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19339-88081/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 18:43:49.358404  135833 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 18:43:49.359129  135833 cni.go:84] Creating CNI manager for ""
	I0729 18:43:49.359147  135833 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:43:49.359241  135833 start.go:340] cluster config:
	{Name:pause-134415 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:pause-134415 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.77 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:f
alse registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:43:49.359391  135833 iso.go:125] acquiring lock: {Name:mkff602c753bfa3e0d79d57ee8dc490f2e8d0298 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 18:43:49.361838  135833 out.go:177] * Starting "pause-134415" primary control-plane node in "pause-134415" cluster
	I0729 18:43:49.362946  135833 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 18:43:49.362980  135833 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 18:43:49.362992  135833 cache.go:56] Caching tarball of preloaded images
	I0729 18:43:49.363075  135833 preload.go:172] Found /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 18:43:49.363090  135833 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 18:43:49.363238  135833 profile.go:143] Saving config to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/pause-134415/config.json ...
	I0729 18:43:49.363434  135833 start.go:360] acquireMachinesLock for pause-134415: {Name:mkb4fc91615189a18c2505c715f6575ee0da2912 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 18:44:02.273382  135833 start.go:364] duration metric: took 12.909906163s to acquireMachinesLock for "pause-134415"
	I0729 18:44:02.273433  135833 start.go:96] Skipping create...Using existing machine configuration
	I0729 18:44:02.273443  135833 fix.go:54] fixHost starting: 
	I0729 18:44:02.273835  135833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:44:02.273888  135833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:44:02.291388  135833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35081
	I0729 18:44:02.291772  135833 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:44:02.292306  135833 main.go:141] libmachine: Using API Version  1
	I0729 18:44:02.292330  135833 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:44:02.292652  135833 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:44:02.292882  135833 main.go:141] libmachine: (pause-134415) Calling .DriverName
	I0729 18:44:02.293049  135833 main.go:141] libmachine: (pause-134415) Calling .GetState
	I0729 18:44:02.294580  135833 fix.go:112] recreateIfNeeded on pause-134415: state=Running err=<nil>
	W0729 18:44:02.294598  135833 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 18:44:02.296713  135833 out.go:177] * Updating the running kvm2 "pause-134415" VM ...
	I0729 18:44:02.298283  135833 machine.go:94] provisionDockerMachine start ...
	I0729 18:44:02.298312  135833 main.go:141] libmachine: (pause-134415) Calling .DriverName
	I0729 18:44:02.298550  135833 main.go:141] libmachine: (pause-134415) Calling .GetSSHHostname
	I0729 18:44:02.301523  135833 main.go:141] libmachine: (pause-134415) DBG | domain pause-134415 has defined MAC address 52:54:00:d6:b1:c4 in network mk-pause-134415
	I0729 18:44:02.303747  135833 main.go:141] libmachine: (pause-134415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:b1:c4", ip: ""} in network mk-pause-134415: {Iface:virbr3 ExpiryTime:2024-07-29 19:43:02 +0000 UTC Type:0 Mac:52:54:00:d6:b1:c4 Iaid: IPaddr:192.168.61.77 Prefix:24 Hostname:pause-134415 Clientid:01:52:54:00:d6:b1:c4}
	I0729 18:44:02.303768  135833 main.go:141] libmachine: (pause-134415) DBG | domain pause-134415 has defined IP address 192.168.61.77 and MAC address 52:54:00:d6:b1:c4 in network mk-pause-134415
	I0729 18:44:02.303955  135833 main.go:141] libmachine: (pause-134415) Calling .GetSSHPort
	I0729 18:44:02.304100  135833 main.go:141] libmachine: (pause-134415) Calling .GetSSHKeyPath
	I0729 18:44:02.304275  135833 main.go:141] libmachine: (pause-134415) Calling .GetSSHKeyPath
	I0729 18:44:02.304445  135833 main.go:141] libmachine: (pause-134415) Calling .GetSSHUsername
	I0729 18:44:02.304619  135833 main.go:141] libmachine: Using SSH client type: native
	I0729 18:44:02.304889  135833 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.77 22 <nil> <nil>}
	I0729 18:44:02.304907  135833 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 18:44:02.417815  135833 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-134415
	
	I0729 18:44:02.417849  135833 main.go:141] libmachine: (pause-134415) Calling .GetMachineName
	I0729 18:44:02.418111  135833 buildroot.go:166] provisioning hostname "pause-134415"
	I0729 18:44:02.418137  135833 main.go:141] libmachine: (pause-134415) Calling .GetMachineName
	I0729 18:44:02.418335  135833 main.go:141] libmachine: (pause-134415) Calling .GetSSHHostname
	I0729 18:44:02.421276  135833 main.go:141] libmachine: (pause-134415) DBG | domain pause-134415 has defined MAC address 52:54:00:d6:b1:c4 in network mk-pause-134415
	I0729 18:44:02.421685  135833 main.go:141] libmachine: (pause-134415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:b1:c4", ip: ""} in network mk-pause-134415: {Iface:virbr3 ExpiryTime:2024-07-29 19:43:02 +0000 UTC Type:0 Mac:52:54:00:d6:b1:c4 Iaid: IPaddr:192.168.61.77 Prefix:24 Hostname:pause-134415 Clientid:01:52:54:00:d6:b1:c4}
	I0729 18:44:02.421712  135833 main.go:141] libmachine: (pause-134415) DBG | domain pause-134415 has defined IP address 192.168.61.77 and MAC address 52:54:00:d6:b1:c4 in network mk-pause-134415
	I0729 18:44:02.421875  135833 main.go:141] libmachine: (pause-134415) Calling .GetSSHPort
	I0729 18:44:02.422066  135833 main.go:141] libmachine: (pause-134415) Calling .GetSSHKeyPath
	I0729 18:44:02.422236  135833 main.go:141] libmachine: (pause-134415) Calling .GetSSHKeyPath
	I0729 18:44:02.422392  135833 main.go:141] libmachine: (pause-134415) Calling .GetSSHUsername
	I0729 18:44:02.422560  135833 main.go:141] libmachine: Using SSH client type: native
	I0729 18:44:02.422778  135833 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.77 22 <nil> <nil>}
	I0729 18:44:02.422792  135833 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-134415 && echo "pause-134415" | sudo tee /etc/hostname
	I0729 18:44:02.552647  135833 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-134415
	
	I0729 18:44:02.552680  135833 main.go:141] libmachine: (pause-134415) Calling .GetSSHHostname
	I0729 18:44:02.556004  135833 main.go:141] libmachine: (pause-134415) DBG | domain pause-134415 has defined MAC address 52:54:00:d6:b1:c4 in network mk-pause-134415
	I0729 18:44:02.556408  135833 main.go:141] libmachine: (pause-134415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:b1:c4", ip: ""} in network mk-pause-134415: {Iface:virbr3 ExpiryTime:2024-07-29 19:43:02 +0000 UTC Type:0 Mac:52:54:00:d6:b1:c4 Iaid: IPaddr:192.168.61.77 Prefix:24 Hostname:pause-134415 Clientid:01:52:54:00:d6:b1:c4}
	I0729 18:44:02.556440  135833 main.go:141] libmachine: (pause-134415) DBG | domain pause-134415 has defined IP address 192.168.61.77 and MAC address 52:54:00:d6:b1:c4 in network mk-pause-134415
	I0729 18:44:02.556702  135833 main.go:141] libmachine: (pause-134415) Calling .GetSSHPort
	I0729 18:44:02.556906  135833 main.go:141] libmachine: (pause-134415) Calling .GetSSHKeyPath
	I0729 18:44:02.557134  135833 main.go:141] libmachine: (pause-134415) Calling .GetSSHKeyPath
	I0729 18:44:02.557289  135833 main.go:141] libmachine: (pause-134415) Calling .GetSSHUsername
	I0729 18:44:02.557481  135833 main.go:141] libmachine: Using SSH client type: native
	I0729 18:44:02.557702  135833 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.77 22 <nil> <nil>}
	I0729 18:44:02.557720  135833 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-134415' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-134415/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-134415' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 18:44:02.678204  135833 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 18:44:02.678240  135833 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19339-88081/.minikube CaCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19339-88081/.minikube}
	I0729 18:44:02.678266  135833 buildroot.go:174] setting up certificates
	I0729 18:44:02.678279  135833 provision.go:84] configureAuth start
	I0729 18:44:02.678292  135833 main.go:141] libmachine: (pause-134415) Calling .GetMachineName
	I0729 18:44:02.678632  135833 main.go:141] libmachine: (pause-134415) Calling .GetIP
	I0729 18:44:02.681887  135833 main.go:141] libmachine: (pause-134415) DBG | domain pause-134415 has defined MAC address 52:54:00:d6:b1:c4 in network mk-pause-134415
	I0729 18:44:02.682320  135833 main.go:141] libmachine: (pause-134415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:b1:c4", ip: ""} in network mk-pause-134415: {Iface:virbr3 ExpiryTime:2024-07-29 19:43:02 +0000 UTC Type:0 Mac:52:54:00:d6:b1:c4 Iaid: IPaddr:192.168.61.77 Prefix:24 Hostname:pause-134415 Clientid:01:52:54:00:d6:b1:c4}
	I0729 18:44:02.682347  135833 main.go:141] libmachine: (pause-134415) DBG | domain pause-134415 has defined IP address 192.168.61.77 and MAC address 52:54:00:d6:b1:c4 in network mk-pause-134415
	I0729 18:44:02.682577  135833 main.go:141] libmachine: (pause-134415) Calling .GetSSHHostname
	I0729 18:44:02.685331  135833 main.go:141] libmachine: (pause-134415) DBG | domain pause-134415 has defined MAC address 52:54:00:d6:b1:c4 in network mk-pause-134415
	I0729 18:44:02.685776  135833 main.go:141] libmachine: (pause-134415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:b1:c4", ip: ""} in network mk-pause-134415: {Iface:virbr3 ExpiryTime:2024-07-29 19:43:02 +0000 UTC Type:0 Mac:52:54:00:d6:b1:c4 Iaid: IPaddr:192.168.61.77 Prefix:24 Hostname:pause-134415 Clientid:01:52:54:00:d6:b1:c4}
	I0729 18:44:02.685803  135833 main.go:141] libmachine: (pause-134415) DBG | domain pause-134415 has defined IP address 192.168.61.77 and MAC address 52:54:00:d6:b1:c4 in network mk-pause-134415
	I0729 18:44:02.685978  135833 provision.go:143] copyHostCerts
	I0729 18:44:02.686031  135833 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem, removing ...
	I0729 18:44:02.686041  135833 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem
	I0729 18:44:02.686098  135833 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem (1078 bytes)
	I0729 18:44:02.686236  135833 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem, removing ...
	I0729 18:44:02.686251  135833 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem
	I0729 18:44:02.686283  135833 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem (1123 bytes)
	I0729 18:44:02.686353  135833 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem, removing ...
	I0729 18:44:02.686360  135833 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem
	I0729 18:44:02.686381  135833 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem (1679 bytes)
	I0729 18:44:02.686426  135833 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem org=jenkins.pause-134415 san=[127.0.0.1 192.168.61.77 localhost minikube pause-134415]
	I0729 18:44:03.173736  135833 provision.go:177] copyRemoteCerts
	I0729 18:44:03.173811  135833 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 18:44:03.173842  135833 main.go:141] libmachine: (pause-134415) Calling .GetSSHHostname
	I0729 18:44:03.176640  135833 main.go:141] libmachine: (pause-134415) DBG | domain pause-134415 has defined MAC address 52:54:00:d6:b1:c4 in network mk-pause-134415
	I0729 18:44:03.177015  135833 main.go:141] libmachine: (pause-134415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:b1:c4", ip: ""} in network mk-pause-134415: {Iface:virbr3 ExpiryTime:2024-07-29 19:43:02 +0000 UTC Type:0 Mac:52:54:00:d6:b1:c4 Iaid: IPaddr:192.168.61.77 Prefix:24 Hostname:pause-134415 Clientid:01:52:54:00:d6:b1:c4}
	I0729 18:44:03.177046  135833 main.go:141] libmachine: (pause-134415) DBG | domain pause-134415 has defined IP address 192.168.61.77 and MAC address 52:54:00:d6:b1:c4 in network mk-pause-134415
	I0729 18:44:03.177306  135833 main.go:141] libmachine: (pause-134415) Calling .GetSSHPort
	I0729 18:44:03.177510  135833 main.go:141] libmachine: (pause-134415) Calling .GetSSHKeyPath
	I0729 18:44:03.177717  135833 main.go:141] libmachine: (pause-134415) Calling .GetSSHUsername
	I0729 18:44:03.177873  135833 sshutil.go:53] new ssh client: &{IP:192.168.61.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/pause-134415/id_rsa Username:docker}
	I0729 18:44:03.269351  135833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 18:44:03.298155  135833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0729 18:44:03.325586  135833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 18:44:03.351982  135833 provision.go:87] duration metric: took 673.679797ms to configureAuth
	I0729 18:44:03.352013  135833 buildroot.go:189] setting minikube options for container-runtime
	I0729 18:44:03.352289  135833 config.go:182] Loaded profile config "pause-134415": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:44:03.352396  135833 main.go:141] libmachine: (pause-134415) Calling .GetSSHHostname
	I0729 18:44:03.355501  135833 main.go:141] libmachine: (pause-134415) DBG | domain pause-134415 has defined MAC address 52:54:00:d6:b1:c4 in network mk-pause-134415
	I0729 18:44:03.355848  135833 main.go:141] libmachine: (pause-134415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:b1:c4", ip: ""} in network mk-pause-134415: {Iface:virbr3 ExpiryTime:2024-07-29 19:43:02 +0000 UTC Type:0 Mac:52:54:00:d6:b1:c4 Iaid: IPaddr:192.168.61.77 Prefix:24 Hostname:pause-134415 Clientid:01:52:54:00:d6:b1:c4}
	I0729 18:44:03.355886  135833 main.go:141] libmachine: (pause-134415) DBG | domain pause-134415 has defined IP address 192.168.61.77 and MAC address 52:54:00:d6:b1:c4 in network mk-pause-134415
	I0729 18:44:03.356077  135833 main.go:141] libmachine: (pause-134415) Calling .GetSSHPort
	I0729 18:44:03.356276  135833 main.go:141] libmachine: (pause-134415) Calling .GetSSHKeyPath
	I0729 18:44:03.356415  135833 main.go:141] libmachine: (pause-134415) Calling .GetSSHKeyPath
	I0729 18:44:03.356523  135833 main.go:141] libmachine: (pause-134415) Calling .GetSSHUsername
	I0729 18:44:03.356683  135833 main.go:141] libmachine: Using SSH client type: native
	I0729 18:44:03.356875  135833 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.77 22 <nil> <nil>}
	I0729 18:44:03.356896  135833 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 18:44:10.991092  135833 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 18:44:10.991130  135833 machine.go:97] duration metric: took 8.692827629s to provisionDockerMachine
	I0729 18:44:10.991145  135833 start.go:293] postStartSetup for "pause-134415" (driver="kvm2")
	I0729 18:44:10.991158  135833 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 18:44:10.991199  135833 main.go:141] libmachine: (pause-134415) Calling .DriverName
	I0729 18:44:10.991554  135833 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 18:44:10.991581  135833 main.go:141] libmachine: (pause-134415) Calling .GetSSHHostname
	I0729 18:44:10.994632  135833 main.go:141] libmachine: (pause-134415) DBG | domain pause-134415 has defined MAC address 52:54:00:d6:b1:c4 in network mk-pause-134415
	I0729 18:44:10.995007  135833 main.go:141] libmachine: (pause-134415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:b1:c4", ip: ""} in network mk-pause-134415: {Iface:virbr3 ExpiryTime:2024-07-29 19:43:02 +0000 UTC Type:0 Mac:52:54:00:d6:b1:c4 Iaid: IPaddr:192.168.61.77 Prefix:24 Hostname:pause-134415 Clientid:01:52:54:00:d6:b1:c4}
	I0729 18:44:10.995034  135833 main.go:141] libmachine: (pause-134415) DBG | domain pause-134415 has defined IP address 192.168.61.77 and MAC address 52:54:00:d6:b1:c4 in network mk-pause-134415
	I0729 18:44:10.995261  135833 main.go:141] libmachine: (pause-134415) Calling .GetSSHPort
	I0729 18:44:10.995471  135833 main.go:141] libmachine: (pause-134415) Calling .GetSSHKeyPath
	I0729 18:44:10.995656  135833 main.go:141] libmachine: (pause-134415) Calling .GetSSHUsername
	I0729 18:44:10.995790  135833 sshutil.go:53] new ssh client: &{IP:192.168.61.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/pause-134415/id_rsa Username:docker}
	I0729 18:44:11.105353  135833 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 18:44:11.110646  135833 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 18:44:11.110677  135833 filesync.go:126] Scanning /home/jenkins/minikube-integration/19339-88081/.minikube/addons for local assets ...
	I0729 18:44:11.110758  135833 filesync.go:126] Scanning /home/jenkins/minikube-integration/19339-88081/.minikube/files for local assets ...
	I0729 18:44:11.110860  135833 filesync.go:149] local asset: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem -> 952822.pem in /etc/ssl/certs
	I0729 18:44:11.110995  135833 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 18:44:11.122209  135833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem --> /etc/ssl/certs/952822.pem (1708 bytes)
	I0729 18:44:11.148242  135833 start.go:296] duration metric: took 157.080686ms for postStartSetup
	I0729 18:44:11.148288  135833 fix.go:56] duration metric: took 8.87484515s for fixHost
	I0729 18:44:11.148316  135833 main.go:141] libmachine: (pause-134415) Calling .GetSSHHostname
	I0729 18:44:11.151453  135833 main.go:141] libmachine: (pause-134415) DBG | domain pause-134415 has defined MAC address 52:54:00:d6:b1:c4 in network mk-pause-134415
	I0729 18:44:11.151859  135833 main.go:141] libmachine: (pause-134415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:b1:c4", ip: ""} in network mk-pause-134415: {Iface:virbr3 ExpiryTime:2024-07-29 19:43:02 +0000 UTC Type:0 Mac:52:54:00:d6:b1:c4 Iaid: IPaddr:192.168.61.77 Prefix:24 Hostname:pause-134415 Clientid:01:52:54:00:d6:b1:c4}
	I0729 18:44:11.151894  135833 main.go:141] libmachine: (pause-134415) DBG | domain pause-134415 has defined IP address 192.168.61.77 and MAC address 52:54:00:d6:b1:c4 in network mk-pause-134415
	I0729 18:44:11.152193  135833 main.go:141] libmachine: (pause-134415) Calling .GetSSHPort
	I0729 18:44:11.152408  135833 main.go:141] libmachine: (pause-134415) Calling .GetSSHKeyPath
	I0729 18:44:11.152591  135833 main.go:141] libmachine: (pause-134415) Calling .GetSSHKeyPath
	I0729 18:44:11.152776  135833 main.go:141] libmachine: (pause-134415) Calling .GetSSHUsername
	I0729 18:44:11.152972  135833 main.go:141] libmachine: Using SSH client type: native
	I0729 18:44:11.153186  135833 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.77 22 <nil> <nil>}
	I0729 18:44:11.153200  135833 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 18:44:11.271133  135833 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722278651.263243724
	
	I0729 18:44:11.271161  135833 fix.go:216] guest clock: 1722278651.263243724
	I0729 18:44:11.271169  135833 fix.go:229] Guest: 2024-07-29 18:44:11.263243724 +0000 UTC Remote: 2024-07-29 18:44:11.148294172 +0000 UTC m=+21.924155514 (delta=114.949552ms)
	I0729 18:44:11.271217  135833 fix.go:200] guest clock delta is within tolerance: 114.949552ms
	I0729 18:44:11.271229  135833 start.go:83] releasing machines lock for "pause-134415", held for 8.997816674s
	I0729 18:44:11.271256  135833 main.go:141] libmachine: (pause-134415) Calling .DriverName
	I0729 18:44:11.271589  135833 main.go:141] libmachine: (pause-134415) Calling .GetIP
	I0729 18:44:11.274568  135833 main.go:141] libmachine: (pause-134415) DBG | domain pause-134415 has defined MAC address 52:54:00:d6:b1:c4 in network mk-pause-134415
	I0729 18:44:11.274981  135833 main.go:141] libmachine: (pause-134415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:b1:c4", ip: ""} in network mk-pause-134415: {Iface:virbr3 ExpiryTime:2024-07-29 19:43:02 +0000 UTC Type:0 Mac:52:54:00:d6:b1:c4 Iaid: IPaddr:192.168.61.77 Prefix:24 Hostname:pause-134415 Clientid:01:52:54:00:d6:b1:c4}
	I0729 18:44:11.275012  135833 main.go:141] libmachine: (pause-134415) DBG | domain pause-134415 has defined IP address 192.168.61.77 and MAC address 52:54:00:d6:b1:c4 in network mk-pause-134415
	I0729 18:44:11.275283  135833 main.go:141] libmachine: (pause-134415) Calling .DriverName
	I0729 18:44:11.275817  135833 main.go:141] libmachine: (pause-134415) Calling .DriverName
	I0729 18:44:11.275986  135833 main.go:141] libmachine: (pause-134415) Calling .DriverName
	I0729 18:44:11.276091  135833 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 18:44:11.276156  135833 main.go:141] libmachine: (pause-134415) Calling .GetSSHHostname
	I0729 18:44:11.276186  135833 ssh_runner.go:195] Run: cat /version.json
	I0729 18:44:11.276231  135833 main.go:141] libmachine: (pause-134415) Calling .GetSSHHostname
	I0729 18:44:11.279192  135833 main.go:141] libmachine: (pause-134415) DBG | domain pause-134415 has defined MAC address 52:54:00:d6:b1:c4 in network mk-pause-134415
	I0729 18:44:11.279613  135833 main.go:141] libmachine: (pause-134415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:b1:c4", ip: ""} in network mk-pause-134415: {Iface:virbr3 ExpiryTime:2024-07-29 19:43:02 +0000 UTC Type:0 Mac:52:54:00:d6:b1:c4 Iaid: IPaddr:192.168.61.77 Prefix:24 Hostname:pause-134415 Clientid:01:52:54:00:d6:b1:c4}
	I0729 18:44:11.279653  135833 main.go:141] libmachine: (pause-134415) DBG | domain pause-134415 has defined IP address 192.168.61.77 and MAC address 52:54:00:d6:b1:c4 in network mk-pause-134415
	I0729 18:44:11.279875  135833 main.go:141] libmachine: (pause-134415) Calling .GetSSHPort
	I0729 18:44:11.280031  135833 main.go:141] libmachine: (pause-134415) DBG | domain pause-134415 has defined MAC address 52:54:00:d6:b1:c4 in network mk-pause-134415
	I0729 18:44:11.280075  135833 main.go:141] libmachine: (pause-134415) Calling .GetSSHKeyPath
	I0729 18:44:11.280225  135833 main.go:141] libmachine: (pause-134415) Calling .GetSSHUsername
	I0729 18:44:11.280390  135833 sshutil.go:53] new ssh client: &{IP:192.168.61.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/pause-134415/id_rsa Username:docker}
	I0729 18:44:11.280427  135833 main.go:141] libmachine: (pause-134415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:b1:c4", ip: ""} in network mk-pause-134415: {Iface:virbr3 ExpiryTime:2024-07-29 19:43:02 +0000 UTC Type:0 Mac:52:54:00:d6:b1:c4 Iaid: IPaddr:192.168.61.77 Prefix:24 Hostname:pause-134415 Clientid:01:52:54:00:d6:b1:c4}
	I0729 18:44:11.280460  135833 main.go:141] libmachine: (pause-134415) DBG | domain pause-134415 has defined IP address 192.168.61.77 and MAC address 52:54:00:d6:b1:c4 in network mk-pause-134415
	I0729 18:44:11.280654  135833 main.go:141] libmachine: (pause-134415) Calling .GetSSHPort
	I0729 18:44:11.280808  135833 main.go:141] libmachine: (pause-134415) Calling .GetSSHKeyPath
	I0729 18:44:11.280961  135833 main.go:141] libmachine: (pause-134415) Calling .GetSSHUsername
	I0729 18:44:11.281116  135833 sshutil.go:53] new ssh client: &{IP:192.168.61.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/pause-134415/id_rsa Username:docker}
	I0729 18:44:11.387893  135833 ssh_runner.go:195] Run: systemctl --version
	I0729 18:44:11.394791  135833 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 18:44:11.566907  135833 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 18:44:11.575014  135833 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 18:44:11.575076  135833 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 18:44:11.584733  135833 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0729 18:44:11.584762  135833 start.go:495] detecting cgroup driver to use...
	I0729 18:44:11.584843  135833 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 18:44:11.609382  135833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 18:44:11.628177  135833 docker.go:217] disabling cri-docker service (if available) ...
	I0729 18:44:11.628240  135833 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 18:44:11.641807  135833 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 18:44:11.656180  135833 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 18:44:11.832624  135833 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 18:44:11.996522  135833 docker.go:233] disabling docker service ...
	I0729 18:44:11.996604  135833 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 18:44:12.026908  135833 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 18:44:12.048714  135833 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 18:44:12.336418  135833 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 18:44:12.583016  135833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 18:44:12.627801  135833 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 18:44:12.742393  135833 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 18:44:12.742469  135833 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:44:12.775991  135833 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 18:44:12.776058  135833 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:44:12.857502  135833 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:44:12.909873  135833 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:44:12.983700  135833 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 18:44:13.032004  135833 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:44:13.060740  135833 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:44:13.086032  135833 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:44:13.103933  135833 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 18:44:13.117788  135833 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 18:44:13.129957  135833 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:44:13.347947  135833 ssh_runner.go:195] Run: sudo systemctl restart crio
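The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed before restarting crio: it pins the pause image to registry.k8s.io/pause:3.9, switches cgroup_manager to cgroupfs, moves conmon into the "pod" cgroup, and adds net.ipv4.ip_unprivileged_port_start=0 to default_sysctls. A minimal sketch of the two key substitutions applied to a config string with Go's regexp package; the sample input is illustrative, not taken from the VM:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := "pause_image = \"registry.k8s.io/pause:3.5\"\ncgroup_manager = \"systemd\"\n"

	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)

	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	fmt.Print(conf)
}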
	I0729 18:44:13.862953  135833 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 18:44:13.863035  135833 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 18:44:13.869117  135833 start.go:563] Will wait 60s for crictl version
	I0729 18:44:13.869161  135833 ssh_runner.go:195] Run: which crictl
	I0729 18:44:13.873204  135833 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 18:44:13.915458  135833 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 18:44:13.915555  135833 ssh_runner.go:195] Run: crio --version
	I0729 18:44:13.947739  135833 ssh_runner.go:195] Run: crio --version
	I0729 18:44:13.978444  135833 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 18:44:13.979789  135833 main.go:141] libmachine: (pause-134415) Calling .GetIP
	I0729 18:44:13.982746  135833 main.go:141] libmachine: (pause-134415) DBG | domain pause-134415 has defined MAC address 52:54:00:d6:b1:c4 in network mk-pause-134415
	I0729 18:44:13.983124  135833 main.go:141] libmachine: (pause-134415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:b1:c4", ip: ""} in network mk-pause-134415: {Iface:virbr3 ExpiryTime:2024-07-29 19:43:02 +0000 UTC Type:0 Mac:52:54:00:d6:b1:c4 Iaid: IPaddr:192.168.61.77 Prefix:24 Hostname:pause-134415 Clientid:01:52:54:00:d6:b1:c4}
	I0729 18:44:13.983150  135833 main.go:141] libmachine: (pause-134415) DBG | domain pause-134415 has defined IP address 192.168.61.77 and MAC address 52:54:00:d6:b1:c4 in network mk-pause-134415
	I0729 18:44:13.983405  135833 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0729 18:44:13.987793  135833 kubeadm.go:883] updating cluster {Name:pause-134415 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3
ClusterName:pause-134415 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.77 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false
olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 18:44:13.987982  135833 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 18:44:13.988069  135833 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:44:14.030379  135833 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 18:44:14.030401  135833 crio.go:433] Images already preloaded, skipping extraction
	I0729 18:44:14.030458  135833 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:44:14.062730  135833 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 18:44:14.062753  135833 cache_images.go:84] Images are preloaded, skipping loading
	I0729 18:44:14.062763  135833 kubeadm.go:934] updating node { 192.168.61.77 8443 v1.30.3 crio true true} ...
	I0729 18:44:14.062895  135833 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-134415 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.77
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:pause-134415 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 18:44:14.062978  135833 ssh_runner.go:195] Run: crio config
	I0729 18:44:14.115131  135833 cni.go:84] Creating CNI manager for ""
	I0729 18:44:14.115152  135833 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:44:14.115163  135833 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 18:44:14.115185  135833 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.77 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-134415 NodeName:pause-134415 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.77"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.77 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 18:44:14.115316  135833 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.77
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-134415"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.77
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.77"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
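The generated kubeadm config above is a single file holding four YAML documents separated by `---`: InitConfiguration and ClusterConfiguration (kubeadm.k8s.io/v1beta3), KubeletConfiguration (kubelet.config.k8s.io/v1beta1), and KubeProxyConfiguration (kubeproxy.config.k8s.io/v1alpha1). A minimal sketch of iterating over such a multi-document file and printing each document's kind, assuming gopkg.in/yaml.v3 is available; the file path is illustrative:

package main

import (
	"errors"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // illustrative path
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		// Only apiVersion and kind are decoded; other fields are ignored.
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break
			}
			log.Fatal(err)
		}
		fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
	}
}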
	I0729 18:44:14.115379  135833 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 18:44:14.125948  135833 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 18:44:14.126021  135833 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 18:44:14.135416  135833 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0729 18:44:14.151889  135833 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 18:44:14.167909  135833 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0729 18:44:14.184929  135833 ssh_runner.go:195] Run: grep 192.168.61.77	control-plane.minikube.internal$ /etc/hosts
	I0729 18:44:14.188982  135833 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:44:14.342778  135833 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:44:14.357848  135833 certs.go:68] Setting up /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/pause-134415 for IP: 192.168.61.77
	I0729 18:44:14.357871  135833 certs.go:194] generating shared ca certs ...
	I0729 18:44:14.357894  135833 certs.go:226] acquiring lock for ca certs: {Name:mk20106d58b3f22ea0fc0f4b499fa3fa572a2690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:44:14.358063  135833 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key
	I0729 18:44:14.358119  135833 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key
	I0729 18:44:14.358133  135833 certs.go:256] generating profile certs ...
	I0729 18:44:14.358234  135833 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/pause-134415/client.key
	I0729 18:44:14.358321  135833 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/pause-134415/apiserver.key.5824ec0e
	I0729 18:44:14.358377  135833 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/pause-134415/proxy-client.key
	I0729 18:44:14.358523  135833 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282.pem (1338 bytes)
	W0729 18:44:14.358572  135833 certs.go:480] ignoring /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282_empty.pem, impossibly tiny 0 bytes
	I0729 18:44:14.358586  135833 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 18:44:14.358631  135833 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem (1078 bytes)
	I0729 18:44:14.358665  135833 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem (1123 bytes)
	I0729 18:44:14.358696  135833 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem (1679 bytes)
	I0729 18:44:14.358757  135833 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem (1708 bytes)
	I0729 18:44:14.359526  135833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 18:44:14.387952  135833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 18:44:14.412100  135833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 18:44:14.438300  135833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 18:44:14.465142  135833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/pause-134415/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0729 18:44:14.492986  135833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/pause-134415/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 18:44:14.518894  135833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/pause-134415/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 18:44:14.542726  135833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/pause-134415/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 18:44:14.567500  135833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 18:44:14.592851  135833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282.pem --> /usr/share/ca-certificates/95282.pem (1338 bytes)
	I0729 18:44:14.616035  135833 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem --> /usr/share/ca-certificates/952822.pem (1708 bytes)
	I0729 18:44:14.639257  135833 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 18:44:14.655157  135833 ssh_runner.go:195] Run: openssl version
	I0729 18:44:14.660732  135833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/952822.pem && ln -fs /usr/share/ca-certificates/952822.pem /etc/ssl/certs/952822.pem"
	I0729 18:44:14.671565  135833 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/952822.pem
	I0729 18:44:14.675896  135833 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:45 /usr/share/ca-certificates/952822.pem
	I0729 18:44:14.675941  135833 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/952822.pem
	I0729 18:44:14.681533  135833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/952822.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 18:44:14.690633  135833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 18:44:14.701438  135833 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:44:14.705942  135833 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:44:14.705985  135833 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:44:14.711683  135833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 18:44:14.721209  135833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/95282.pem && ln -fs /usr/share/ca-certificates/95282.pem /etc/ssl/certs/95282.pem"
	I0729 18:44:14.732244  135833 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/95282.pem
	I0729 18:44:14.737343  135833 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:45 /usr/share/ca-certificates/95282.pem
	I0729 18:44:14.737389  135833 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/95282.pem
	I0729 18:44:14.743086  135833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/95282.pem /etc/ssl/certs/51391683.0"
	I0729 18:44:14.752422  135833 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 18:44:14.756995  135833 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 18:44:14.763007  135833 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 18:44:14.768627  135833 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 18:44:14.774155  135833 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 18:44:14.779491  135833 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 18:44:14.784929  135833 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
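Each `openssl x509 -checkend 86400` call above asks whether the certificate will expire within the next 86400 seconds (24 hours); a non-zero exit status would force regeneration. A minimal sketch of an equivalent check in Go with crypto/x509; the certificate path is taken from the log above and used only for illustration:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate expires within
// the given window, mirroring `openssl x509 -checkend`.
func expiresWithin(pemBytes []byte, window time.Duration) (bool, error) {
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		return false, fmt.Errorf("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	soon, err := expiresWithin(data, 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", soon)
}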
	I0729 18:44:14.790568  135833 kubeadm.go:392] StartCluster: {Name:pause-134415 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:pause-134415 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.77 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false ol
m:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:44:14.790731  135833 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 18:44:14.790802  135833 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:44:14.836145  135833 cri.go:89] found id: "68ff3286f312a7e6cc7f7eda1f926956aa3e408ea8ac569394382c5e2f65caff"
	I0729 18:44:14.836171  135833 cri.go:89] found id: "e4cea604b117c735de1f3732a68407d4d9edbfe5bc517161f76a1e8b4f540edd"
	I0729 18:44:14.836178  135833 cri.go:89] found id: "d389fc1bf4eeb57dd955d429698dc986118bfab83fb8f94539d6288ec897d308"
	I0729 18:44:14.836183  135833 cri.go:89] found id: "7844ba6af4b037548772d7ba9aab3b599cc5b42ffa5763882a3370c48658933f"
	I0729 18:44:14.836188  135833 cri.go:89] found id: "4dab08bb9b264b26d33674a128e2c7a091b0aa71b1a13ea91a0075b95ffb3c7a"
	I0729 18:44:14.836193  135833 cri.go:89] found id: "e50cff59ddb5d30844f6780ac1ffa4e2b09054f5cd6846bf7aedce6c54433e88"
	I0729 18:44:14.836197  135833 cri.go:89] found id: "40b21d037c4d35d751216e79bcc1cc94b246d71bc8b75eb2b9276f4218b96466"
	I0729 18:44:14.836201  135833 cri.go:89] found id: "d89e3928a8657e1367b89512e222a3fee9b58df8a840feb4a22c3db334683b9e"
	I0729 18:44:14.836211  135833 cri.go:89] found id: "8ebf55a1fd4af3ae891606af5542ddd065993c58db6736819586746b29adccaf"
	I0729 18:44:14.836220  135833 cri.go:89] found id: "4e09aa304d07676b7aa2211e3a4b6841717fed41081ebeff4e04c93a918a062c"
	I0729 18:44:14.836224  135833 cri.go:89] found id: "34df78294aee9d6a5f54c998dcb44defd04ac492f58a3720bc007b33dabcfa3f"
	I0729 18:44:14.836229  135833 cri.go:89] found id: ""
	I0729 18:44:14.836286  135833 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-134415 -n pause-134415
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-134415 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-134415 logs -n 25: (1.281264314s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p kubernetes-upgrade-695907       | kubernetes-upgrade-695907 | jenkins | v1.33.1 | 29 Jul 24 18:39 UTC |                     |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0       |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-801126        | force-systemd-env-801126  | jenkins | v1.33.1 | 29 Jul 24 18:40 UTC | 29 Jul 24 18:40 UTC |
	| start   | -p stopped-upgrade-931829          | minikube                  | jenkins | v1.26.0 | 29 Jul 24 18:40 UTC | 29 Jul 24 18:41 UTC |
	|         | --memory=2200 --vm-driver=kvm2     |                           |         |         |                     |                     |
	|         |  --container-runtime=crio          |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-790573             | NoKubernetes-790573       | jenkins | v1.33.1 | 29 Jul 24 18:40 UTC | 29 Jul 24 18:41 UTC |
	|         | --no-kubernetes --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p offline-crio-778169             | offline-crio-778169       | jenkins | v1.33.1 | 29 Jul 24 18:40 UTC | 29 Jul 24 18:40 UTC |
	| start   | -p running-upgrade-459882          | minikube                  | jenkins | v1.26.0 | 29 Jul 24 18:40 UTC | 29 Jul 24 18:42 UTC |
	|         | --memory=2200 --vm-driver=kvm2     |                           |         |         |                     |                     |
	|         |  --container-runtime=crio          |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-790573             | NoKubernetes-790573       | jenkins | v1.33.1 | 29 Jul 24 18:41 UTC | 29 Jul 24 18:41 UTC |
	| start   | -p NoKubernetes-790573             | NoKubernetes-790573       | jenkins | v1.33.1 | 29 Jul 24 18:41 UTC | 29 Jul 24 18:42 UTC |
	|         | --no-kubernetes --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-931829 stop        | minikube                  | jenkins | v1.26.0 | 29 Jul 24 18:41 UTC | 29 Jul 24 18:41 UTC |
	| start   | -p stopped-upgrade-931829          | stopped-upgrade-931829    | jenkins | v1.33.1 | 29 Jul 24 18:41 UTC | 29 Jul 24 18:42 UTC |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p running-upgrade-459882          | running-upgrade-459882    | jenkins | v1.33.1 | 29 Jul 24 18:42 UTC | 29 Jul 24 18:43 UTC |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-790573 sudo        | NoKubernetes-790573       | jenkins | v1.33.1 | 29 Jul 24 18:42 UTC |                     |
	|         | systemctl is-active --quiet        |                           |         |         |                     |                     |
	|         | service kubelet                    |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-790573             | NoKubernetes-790573       | jenkins | v1.33.1 | 29 Jul 24 18:42 UTC | 29 Jul 24 18:42 UTC |
	| start   | -p NoKubernetes-790573             | NoKubernetes-790573       | jenkins | v1.33.1 | 29 Jul 24 18:42 UTC | 29 Jul 24 18:42 UTC |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-790573 sudo        | NoKubernetes-790573       | jenkins | v1.33.1 | 29 Jul 24 18:42 UTC |                     |
	|         | systemctl is-active --quiet        |                           |         |         |                     |                     |
	|         | service kubelet                    |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-790573             | NoKubernetes-790573       | jenkins | v1.33.1 | 29 Jul 24 18:42 UTC | 29 Jul 24 18:42 UTC |
	| start   | -p pause-134415 --memory=2048      | pause-134415              | jenkins | v1.33.1 | 29 Jul 24 18:42 UTC | 29 Jul 24 18:43 UTC |
	|         | --install-addons=false             |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2           |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-931829          | stopped-upgrade-931829    | jenkins | v1.33.1 | 29 Jul 24 18:42 UTC | 29 Jul 24 18:42 UTC |
	| start   | -p cert-expiration-974855          | cert-expiration-974855    | jenkins | v1.33.1 | 29 Jul 24 18:42 UTC | 29 Jul 24 18:43 UTC |
	|         | --memory=2048                      |                           |         |         |                     |                     |
	|         | --cert-expiration=3m               |                           |         |         |                     |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-459882          | running-upgrade-459882    | jenkins | v1.33.1 | 29 Jul 24 18:43 UTC | 29 Jul 24 18:43 UTC |
	| start   | -p force-systemd-flag-729652       | force-systemd-flag-729652 | jenkins | v1.33.1 | 29 Jul 24 18:43 UTC | 29 Jul 24 18:44 UTC |
	|         | --memory=2048 --force-systemd      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p pause-134415                    | pause-134415              | jenkins | v1.33.1 | 29 Jul 24 18:43 UTC | 29 Jul 24 18:44 UTC |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-729652 ssh cat  | force-systemd-flag-729652 | jenkins | v1.33.1 | 29 Jul 24 18:44 UTC | 29 Jul 24 18:44 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-729652       | force-systemd-flag-729652 | jenkins | v1.33.1 | 29 Jul 24 18:44 UTC | 29 Jul 24 18:44 UTC |
	| start   | -p cert-options-899685             | cert-options-899685       | jenkins | v1.33.1 | 29 Jul 24 18:44 UTC |                     |
	|         | --memory=2048                      |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1          |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15      |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost        |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com   |                           |         |         |                     |                     |
	|         | --apiserver-port=8555              |                           |         |         |                     |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 18:44:23
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 18:44:23.809900  136235 out.go:291] Setting OutFile to fd 1 ...
	I0729 18:44:23.810091  136235 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:44:23.810094  136235 out.go:304] Setting ErrFile to fd 2...
	I0729 18:44:23.810097  136235 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:44:23.810282  136235 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19339-88081/.minikube/bin
	I0729 18:44:23.810839  136235 out.go:298] Setting JSON to false
	I0729 18:44:23.811725  136235 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":12384,"bootTime":1722266280,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 18:44:23.811772  136235 start.go:139] virtualization: kvm guest
	I0729 18:44:23.814288  136235 out.go:177] * [cert-options-899685] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 18:44:23.815732  136235 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 18:44:23.815765  136235 notify.go:220] Checking for updates...
	I0729 18:44:23.818138  136235 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 18:44:23.819299  136235 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19339-88081/kubeconfig
	I0729 18:44:23.820562  136235 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19339-88081/.minikube
	I0729 18:44:23.821738  136235 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 18:44:23.823103  136235 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 18:44:23.824769  136235 config.go:182] Loaded profile config "cert-expiration-974855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:44:23.824887  136235 config.go:182] Loaded profile config "kubernetes-upgrade-695907": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 18:44:23.824994  136235 config.go:182] Loaded profile config "pause-134415": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:44:23.825061  136235 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 18:44:23.859727  136235 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 18:44:23.860822  136235 start.go:297] selected driver: kvm2
	I0729 18:44:23.860829  136235 start.go:901] validating driver "kvm2" against <nil>
	I0729 18:44:23.860837  136235 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 18:44:23.861787  136235 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 18:44:23.861875  136235 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19339-88081/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 18:44:23.876295  136235 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 18:44:23.876326  136235 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 18:44:23.876613  136235 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 18:44:23.876634  136235 cni.go:84] Creating CNI manager for ""
	I0729 18:44:23.876648  136235 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:44:23.876657  136235 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 18:44:23.876711  136235 start.go:340] cluster config:
	{Name:cert-options-899685 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:cert-options-899685 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.
1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I0729 18:44:23.876830  136235 iso.go:125] acquiring lock: {Name:mkff602c753bfa3e0d79d57ee8dc490f2e8d0298 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 18:44:23.878496  136235 out.go:177] * Starting "cert-options-899685" primary control-plane node in "cert-options-899685" cluster
	I0729 18:44:23.879634  136235 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 18:44:23.879660  136235 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 18:44:23.879665  136235 cache.go:56] Caching tarball of preloaded images
	I0729 18:44:23.879743  136235 preload.go:172] Found /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 18:44:23.879749  136235 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 18:44:23.879842  136235 profile.go:143] Saving config to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/cert-options-899685/config.json ...
	I0729 18:44:23.879856  136235 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/cert-options-899685/config.json: {Name:mk14233c801f43586c4a6fbf2de2f0d9abea3770 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:44:23.879988  136235 start.go:360] acquireMachinesLock for cert-options-899685: {Name:mkb4fc91615189a18c2505c715f6575ee0da2912 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 18:44:23.880018  136235 start.go:364] duration metric: took 19.061µs to acquireMachinesLock for "cert-options-899685"
	I0729 18:44:23.880035  136235 start.go:93] Provisioning new machine with config: &{Name:cert-options-899685 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.30.3 ClusterName:cert-options-899685 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mou
ntUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8555 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 18:44:23.880088  136235 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 18:44:22.186968  135833 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 18:44:22.198438  135833 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 18:44:22.218240  135833 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 18:44:22.229293  135833 system_pods.go:59] 6 kube-system pods found
	I0729 18:44:22.229331  135833 system_pods.go:61] "coredns-7db6d8ff4d-g6tp9" [769ed268-b082-415e-b416-a14e68a0084f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 18:44:22.229342  135833 system_pods.go:61] "etcd-pause-134415" [281750af-8362-4638-b0db-01f5f69bcd38] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 18:44:22.229351  135833 system_pods.go:61] "kube-apiserver-pause-134415" [8a19558c-02cd-472d-a10b-255ad2a3dc66] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 18:44:22.229361  135833 system_pods.go:61] "kube-controller-manager-pause-134415" [b40ab25f-2497-4fda-8e23-5e52e1150e55] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 18:44:22.229372  135833 system_pods.go:61] "kube-proxy-sm2kx" [6fa9e9f4-9f39-41f5-9d79-4f394201011f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 18:44:22.229387  135833 system_pods.go:61] "kube-scheduler-pause-134415" [4e7d5033-71d1-4921-af2f-37f868cc0896] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 18:44:22.229402  135833 system_pods.go:74] duration metric: took 11.138932ms to wait for pod list to return data ...
	I0729 18:44:22.229414  135833 node_conditions.go:102] verifying NodePressure condition ...
	I0729 18:44:22.233223  135833 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 18:44:22.233252  135833 node_conditions.go:123] node cpu capacity is 2
	I0729 18:44:22.233265  135833 node_conditions.go:105] duration metric: took 3.841523ms to run NodePressure ...
	I0729 18:44:22.233289  135833 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:44:22.528972  135833 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 18:44:22.533866  135833 kubeadm.go:739] kubelet initialised
	I0729 18:44:22.533886  135833 kubeadm.go:740] duration metric: took 4.888474ms waiting for restarted kubelet to initialise ...
	I0729 18:44:22.533896  135833 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:44:22.538147  135833 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-g6tp9" in "kube-system" namespace to be "Ready" ...
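At this point the pause-134415 start lists the kube-system pods (six found, all with restarting containers), verifies NodePressure, runs `kubeadm init phase addon all`, and then waits for each system-critical pod to become Ready. A minimal sketch of listing kube-system pods and their phases with client-go; the kubeconfig path is an assumption for illustration only:

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative kubeconfig path, not taken from this run.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}
	pods, err := client.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s  phase=%s\n", p.Name, p.Status.Phase)
	}
}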
	I0729 18:44:23.881544  136235 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 18:44:23.881674  136235 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:44:23.881704  136235 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:44:23.895411  136235 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41169
	I0729 18:44:23.895946  136235 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:44:23.896489  136235 main.go:141] libmachine: Using API Version  1
	I0729 18:44:23.896524  136235 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:44:23.896919  136235 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:44:23.897138  136235 main.go:141] libmachine: (cert-options-899685) Calling .GetMachineName
	I0729 18:44:23.897283  136235 main.go:141] libmachine: (cert-options-899685) Calling .DriverName
	I0729 18:44:23.897423  136235 start.go:159] libmachine.API.Create for "cert-options-899685" (driver="kvm2")
	I0729 18:44:23.897444  136235 client.go:168] LocalClient.Create starting
	I0729 18:44:23.897471  136235 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem
	I0729 18:44:23.897498  136235 main.go:141] libmachine: Decoding PEM data...
	I0729 18:44:23.897511  136235 main.go:141] libmachine: Parsing certificate...
	I0729 18:44:23.897567  136235 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem
	I0729 18:44:23.897580  136235 main.go:141] libmachine: Decoding PEM data...
	I0729 18:44:23.897589  136235 main.go:141] libmachine: Parsing certificate...
	I0729 18:44:23.897599  136235 main.go:141] libmachine: Running pre-create checks...
	I0729 18:44:23.897605  136235 main.go:141] libmachine: (cert-options-899685) Calling .PreCreateCheck
	I0729 18:44:23.897946  136235 main.go:141] libmachine: (cert-options-899685) Calling .GetConfigRaw
	I0729 18:44:23.898298  136235 main.go:141] libmachine: Creating machine...
	I0729 18:44:23.898304  136235 main.go:141] libmachine: (cert-options-899685) Calling .Create
	I0729 18:44:23.898426  136235 main.go:141] libmachine: (cert-options-899685) Creating KVM machine...
	I0729 18:44:23.899708  136235 main.go:141] libmachine: (cert-options-899685) DBG | found existing default KVM network
	I0729 18:44:23.902264  136235 main.go:141] libmachine: (cert-options-899685) DBG | I0729 18:44:23.902089  136258 network.go:209] skipping subnet 192.168.39.0/24 that is reserved: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 18:44:23.903167  136235 main.go:141] libmachine: (cert-options-899685) DBG | I0729 18:44:23.903098  136258 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:7b:af:8a} reservation:<nil>}
	I0729 18:44:23.903958  136235 main.go:141] libmachine: (cert-options-899685) DBG | I0729 18:44:23.903871  136258 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:1b:5e:f1} reservation:<nil>}
	I0729 18:44:23.904634  136235 main.go:141] libmachine: (cert-options-899685) DBG | I0729 18:44:23.904575  136258 network.go:211] skipping subnet 192.168.72.0/24 that is taken: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.72.1 IfaceMTU:1500 IfaceMAC:52:54:00:78:c8:3d} reservation:<nil>}
	I0729 18:44:23.905685  136235 main.go:141] libmachine: (cert-options-899685) DBG | I0729 18:44:23.905609  136258 network.go:206] using free private subnet 192.168.83.0/24: &{IP:192.168.83.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.83.0/24 Gateway:192.168.83.1 ClientMin:192.168.83.2 ClientMax:192.168.83.254 Broadcast:192.168.83.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00041e750}
	I0729 18:44:23.905716  136235 main.go:141] libmachine: (cert-options-899685) DBG | created network xml: 
	I0729 18:44:23.905733  136235 main.go:141] libmachine: (cert-options-899685) DBG | <network>
	I0729 18:44:23.905741  136235 main.go:141] libmachine: (cert-options-899685) DBG |   <name>mk-cert-options-899685</name>
	I0729 18:44:23.905748  136235 main.go:141] libmachine: (cert-options-899685) DBG |   <dns enable='no'/>
	I0729 18:44:23.905760  136235 main.go:141] libmachine: (cert-options-899685) DBG |   
	I0729 18:44:23.905766  136235 main.go:141] libmachine: (cert-options-899685) DBG |   <ip address='192.168.83.1' netmask='255.255.255.0'>
	I0729 18:44:23.905773  136235 main.go:141] libmachine: (cert-options-899685) DBG |     <dhcp>
	I0729 18:44:23.905780  136235 main.go:141] libmachine: (cert-options-899685) DBG |       <range start='192.168.83.2' end='192.168.83.253'/>
	I0729 18:44:23.905786  136235 main.go:141] libmachine: (cert-options-899685) DBG |     </dhcp>
	I0729 18:44:23.905796  136235 main.go:141] libmachine: (cert-options-899685) DBG |   </ip>
	I0729 18:44:23.905811  136235 main.go:141] libmachine: (cert-options-899685) DBG |   
	I0729 18:44:23.905816  136235 main.go:141] libmachine: (cert-options-899685) DBG | </network>
	I0729 18:44:23.905827  136235 main.go:141] libmachine: (cert-options-899685) DBG | 
	I0729 18:44:23.910636  136235 main.go:141] libmachine: (cert-options-899685) DBG | trying to create private KVM network mk-cert-options-899685 192.168.83.0/24...
	I0729 18:44:23.980062  136235 main.go:141] libmachine: (cert-options-899685) DBG | private KVM network mk-cert-options-899685 192.168.83.0/24 created
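[Editor's note] The network.go lines above walk candidate /24 subnets (192.168.39.0/24, .50, .61, .72) and settle on the first one that is neither reserved nor already attached to a host interface, ending up at 192.168.83.0/24. A rough sketch of that scan; pickFreeSubnet and isTaken are hypothetical names, the step of 11 is inferred from the logged candidates, and the real code consults libvirt networks and host interfaces rather than a fixed map.

    package main

    import (
        "fmt"
        "net"
    )

    // isTaken is a stand-in for the real checks (existing libvirt networks,
    // host interfaces, reservations); here it just consults a fixed set.
    func isTaken(cidr string, taken map[string]bool) bool { return taken[cidr] }

    // pickFreeSubnet scans 192.168.N.0/24 for increasing N and returns the
    // first subnet not marked as taken.
    func pickFreeSubnet(start, step, tries int, taken map[string]bool) (*net.IPNet, error) {
        for i := 0; i < tries; i++ {
            cidr := fmt.Sprintf("192.168.%d.0/24", start+i*step)
            if isTaken(cidr, taken) {
                continue
            }
            _, subnet, err := net.ParseCIDR(cidr)
            if err != nil {
                return nil, err
            }
            return subnet, nil
        }
        return nil, fmt.Errorf("no free subnet found")
    }

    func main() {
        taken := map[string]bool{
            "192.168.39.0/24": true, "192.168.50.0/24": true,
            "192.168.61.0/24": true, "192.168.72.0/24": true,
        }
        subnet, err := pickFreeSubnet(39, 11, 20, taken)
        fmt.Println(subnet, err) // 192.168.83.0/24 <nil>
    }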
	I0729 18:44:23.980175  136235 main.go:141] libmachine: (cert-options-899685) Setting up store path in /home/jenkins/minikube-integration/19339-88081/.minikube/machines/cert-options-899685 ...
	I0729 18:44:23.980205  136235 main.go:141] libmachine: (cert-options-899685) Building disk image from file:///home/jenkins/minikube-integration/19339-88081/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0729 18:44:23.980216  136235 main.go:141] libmachine: (cert-options-899685) DBG | I0729 18:44:23.980169  136258 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19339-88081/.minikube
	I0729 18:44:23.980334  136235 main.go:141] libmachine: (cert-options-899685) Downloading /home/jenkins/minikube-integration/19339-88081/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19339-88081/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0729 18:44:24.229607  136235 main.go:141] libmachine: (cert-options-899685) DBG | I0729 18:44:24.229431  136258 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/cert-options-899685/id_rsa...
	I0729 18:44:24.417430  136235 main.go:141] libmachine: (cert-options-899685) DBG | I0729 18:44:24.417271  136258 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/cert-options-899685/cert-options-899685.rawdisk...
	I0729 18:44:24.417456  136235 main.go:141] libmachine: (cert-options-899685) DBG | Writing magic tar header
	I0729 18:44:24.417473  136235 main.go:141] libmachine: (cert-options-899685) DBG | Writing SSH key tar header
	I0729 18:44:24.417484  136235 main.go:141] libmachine: (cert-options-899685) DBG | I0729 18:44:24.417420  136258 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19339-88081/.minikube/machines/cert-options-899685 ...
	I0729 18:44:24.417628  136235 main.go:141] libmachine: (cert-options-899685) Setting executable bit set on /home/jenkins/minikube-integration/19339-88081/.minikube/machines/cert-options-899685 (perms=drwx------)
	I0729 18:44:24.417657  136235 main.go:141] libmachine: (cert-options-899685) Setting executable bit set on /home/jenkins/minikube-integration/19339-88081/.minikube/machines (perms=drwxr-xr-x)
	I0729 18:44:24.417668  136235 main.go:141] libmachine: (cert-options-899685) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/cert-options-899685
	I0729 18:44:24.417678  136235 main.go:141] libmachine: (cert-options-899685) Setting executable bit set on /home/jenkins/minikube-integration/19339-88081/.minikube (perms=drwxr-xr-x)
	I0729 18:44:24.417691  136235 main.go:141] libmachine: (cert-options-899685) Setting executable bit set on /home/jenkins/minikube-integration/19339-88081 (perms=drwxrwxr-x)
	I0729 18:44:24.417700  136235 main.go:141] libmachine: (cert-options-899685) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 18:44:24.417709  136235 main.go:141] libmachine: (cert-options-899685) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 18:44:24.417715  136235 main.go:141] libmachine: (cert-options-899685) Creating domain...
	I0729 18:44:24.417730  136235 main.go:141] libmachine: (cert-options-899685) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19339-88081/.minikube/machines
	I0729 18:44:24.417739  136235 main.go:141] libmachine: (cert-options-899685) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19339-88081/.minikube
	I0729 18:44:24.417747  136235 main.go:141] libmachine: (cert-options-899685) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19339-88081
	I0729 18:44:24.417757  136235 main.go:141] libmachine: (cert-options-899685) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 18:44:24.417772  136235 main.go:141] libmachine: (cert-options-899685) DBG | Checking permissions on dir: /home/jenkins
	I0729 18:44:24.417782  136235 main.go:141] libmachine: (cert-options-899685) DBG | Checking permissions on dir: /home
	I0729 18:44:24.417788  136235 main.go:141] libmachine: (cert-options-899685) DBG | Skipping /home - not owner
	I0729 18:44:24.418916  136235 main.go:141] libmachine: (cert-options-899685) define libvirt domain using xml: 
	I0729 18:44:24.418924  136235 main.go:141] libmachine: (cert-options-899685) <domain type='kvm'>
	I0729 18:44:24.418929  136235 main.go:141] libmachine: (cert-options-899685)   <name>cert-options-899685</name>
	I0729 18:44:24.418942  136235 main.go:141] libmachine: (cert-options-899685)   <memory unit='MiB'>2048</memory>
	I0729 18:44:24.418947  136235 main.go:141] libmachine: (cert-options-899685)   <vcpu>2</vcpu>
	I0729 18:44:24.418953  136235 main.go:141] libmachine: (cert-options-899685)   <features>
	I0729 18:44:24.418957  136235 main.go:141] libmachine: (cert-options-899685)     <acpi/>
	I0729 18:44:24.418961  136235 main.go:141] libmachine: (cert-options-899685)     <apic/>
	I0729 18:44:24.418965  136235 main.go:141] libmachine: (cert-options-899685)     <pae/>
	I0729 18:44:24.418968  136235 main.go:141] libmachine: (cert-options-899685)     
	I0729 18:44:24.418972  136235 main.go:141] libmachine: (cert-options-899685)   </features>
	I0729 18:44:24.418976  136235 main.go:141] libmachine: (cert-options-899685)   <cpu mode='host-passthrough'>
	I0729 18:44:24.418980  136235 main.go:141] libmachine: (cert-options-899685)   
	I0729 18:44:24.418983  136235 main.go:141] libmachine: (cert-options-899685)   </cpu>
	I0729 18:44:24.418987  136235 main.go:141] libmachine: (cert-options-899685)   <os>
	I0729 18:44:24.418990  136235 main.go:141] libmachine: (cert-options-899685)     <type>hvm</type>
	I0729 18:44:24.419002  136235 main.go:141] libmachine: (cert-options-899685)     <boot dev='cdrom'/>
	I0729 18:44:24.419011  136235 main.go:141] libmachine: (cert-options-899685)     <boot dev='hd'/>
	I0729 18:44:24.419018  136235 main.go:141] libmachine: (cert-options-899685)     <bootmenu enable='no'/>
	I0729 18:44:24.419024  136235 main.go:141] libmachine: (cert-options-899685)   </os>
	I0729 18:44:24.419030  136235 main.go:141] libmachine: (cert-options-899685)   <devices>
	I0729 18:44:24.419038  136235 main.go:141] libmachine: (cert-options-899685)     <disk type='file' device='cdrom'>
	I0729 18:44:24.419050  136235 main.go:141] libmachine: (cert-options-899685)       <source file='/home/jenkins/minikube-integration/19339-88081/.minikube/machines/cert-options-899685/boot2docker.iso'/>
	I0729 18:44:24.419054  136235 main.go:141] libmachine: (cert-options-899685)       <target dev='hdc' bus='scsi'/>
	I0729 18:44:24.419059  136235 main.go:141] libmachine: (cert-options-899685)       <readonly/>
	I0729 18:44:24.419062  136235 main.go:141] libmachine: (cert-options-899685)     </disk>
	I0729 18:44:24.419067  136235 main.go:141] libmachine: (cert-options-899685)     <disk type='file' device='disk'>
	I0729 18:44:24.419072  136235 main.go:141] libmachine: (cert-options-899685)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 18:44:24.419079  136235 main.go:141] libmachine: (cert-options-899685)       <source file='/home/jenkins/minikube-integration/19339-88081/.minikube/machines/cert-options-899685/cert-options-899685.rawdisk'/>
	I0729 18:44:24.419082  136235 main.go:141] libmachine: (cert-options-899685)       <target dev='hda' bus='virtio'/>
	I0729 18:44:24.419086  136235 main.go:141] libmachine: (cert-options-899685)     </disk>
	I0729 18:44:24.419090  136235 main.go:141] libmachine: (cert-options-899685)     <interface type='network'>
	I0729 18:44:24.419095  136235 main.go:141] libmachine: (cert-options-899685)       <source network='mk-cert-options-899685'/>
	I0729 18:44:24.419102  136235 main.go:141] libmachine: (cert-options-899685)       <model type='virtio'/>
	I0729 18:44:24.419109  136235 main.go:141] libmachine: (cert-options-899685)     </interface>
	I0729 18:44:24.419117  136235 main.go:141] libmachine: (cert-options-899685)     <interface type='network'>
	I0729 18:44:24.419125  136235 main.go:141] libmachine: (cert-options-899685)       <source network='default'/>
	I0729 18:44:24.419131  136235 main.go:141] libmachine: (cert-options-899685)       <model type='virtio'/>
	I0729 18:44:24.419137  136235 main.go:141] libmachine: (cert-options-899685)     </interface>
	I0729 18:44:24.419140  136235 main.go:141] libmachine: (cert-options-899685)     <serial type='pty'>
	I0729 18:44:24.419145  136235 main.go:141] libmachine: (cert-options-899685)       <target port='0'/>
	I0729 18:44:24.419147  136235 main.go:141] libmachine: (cert-options-899685)     </serial>
	I0729 18:44:24.419154  136235 main.go:141] libmachine: (cert-options-899685)     <console type='pty'>
	I0729 18:44:24.419158  136235 main.go:141] libmachine: (cert-options-899685)       <target type='serial' port='0'/>
	I0729 18:44:24.419162  136235 main.go:141] libmachine: (cert-options-899685)     </console>
	I0729 18:44:24.419165  136235 main.go:141] libmachine: (cert-options-899685)     <rng model='virtio'>
	I0729 18:44:24.419180  136235 main.go:141] libmachine: (cert-options-899685)       <backend model='random'>/dev/random</backend>
	I0729 18:44:24.419186  136235 main.go:141] libmachine: (cert-options-899685)     </rng>
	I0729 18:44:24.419204  136235 main.go:141] libmachine: (cert-options-899685)     
	I0729 18:44:24.419215  136235 main.go:141] libmachine: (cert-options-899685)     
	I0729 18:44:24.419222  136235 main.go:141] libmachine: (cert-options-899685)   </devices>
	I0729 18:44:24.419226  136235 main.go:141] libmachine: (cert-options-899685) </domain>
	I0729 18:44:24.419233  136235 main.go:141] libmachine: (cert-options-899685) 
	I0729 18:44:24.423500  136235 main.go:141] libmachine: (cert-options-899685) DBG | domain cert-options-899685 has defined MAC address 52:54:00:4b:1f:a0 in network default
	I0729 18:44:24.424036  136235 main.go:141] libmachine: (cert-options-899685) Ensuring networks are active...
	I0729 18:44:24.424054  136235 main.go:141] libmachine: (cert-options-899685) DBG | domain cert-options-899685 has defined MAC address 52:54:00:d0:99:63 in network mk-cert-options-899685
	I0729 18:44:24.424658  136235 main.go:141] libmachine: (cert-options-899685) Ensuring network default is active
	I0729 18:44:24.424965  136235 main.go:141] libmachine: (cert-options-899685) Ensuring network mk-cert-options-899685 is active
	I0729 18:44:24.425541  136235 main.go:141] libmachine: (cert-options-899685) Getting domain xml...
	I0729 18:44:24.426323  136235 main.go:141] libmachine: (cert-options-899685) Creating domain...
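[Editor's note] Once the domain XML above is assembled, the driver defines the domain with libvirt and boots it ("Creating domain..."). A minimal sketch of those two calls, assuming the libvirt.org/go/libvirt bindings are available and that the file passed on the command line holds an XML definition like the one printed above; error handling is trimmed and this is not the driver's actual code path.

    package main

    import (
        "fmt"
        "os"

        libvirt "libvirt.org/go/libvirt"
    )

    func createDomain(domainXML string) error {
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            return err
        }
        defer conn.Close()

        // Define the persistent domain from the XML, then start it.
        dom, err := conn.DomainDefineXML(domainXML)
        if err != nil {
            return err
        }
        defer dom.Free()

        if err := dom.Create(); err != nil {
            return fmt.Errorf("starting domain: %w", err)
        }
        return nil
    }

    func main() {
        xml, err := os.ReadFile(os.Args[1]) // path to a domain XML like the one logged above
        if err != nil {
            panic(err)
        }
        if err := createDomain(string(xml)); err != nil {
            panic(err)
        }
    }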
	I0729 18:44:24.752152  136235 main.go:141] libmachine: (cert-options-899685) Waiting to get IP...
	I0729 18:44:24.753112  136235 main.go:141] libmachine: (cert-options-899685) DBG | domain cert-options-899685 has defined MAC address 52:54:00:d0:99:63 in network mk-cert-options-899685
	I0729 18:44:24.753509  136235 main.go:141] libmachine: (cert-options-899685) DBG | unable to find current IP address of domain cert-options-899685 in network mk-cert-options-899685
	I0729 18:44:24.753544  136235 main.go:141] libmachine: (cert-options-899685) DBG | I0729 18:44:24.753484  136258 retry.go:31] will retry after 207.106358ms: waiting for machine to come up
	I0729 18:44:24.961906  136235 main.go:141] libmachine: (cert-options-899685) DBG | domain cert-options-899685 has defined MAC address 52:54:00:d0:99:63 in network mk-cert-options-899685
	I0729 18:44:24.962383  136235 main.go:141] libmachine: (cert-options-899685) DBG | unable to find current IP address of domain cert-options-899685 in network mk-cert-options-899685
	I0729 18:44:24.962402  136235 main.go:141] libmachine: (cert-options-899685) DBG | I0729 18:44:24.962339  136258 retry.go:31] will retry after 357.687747ms: waiting for machine to come up
	I0729 18:44:25.321844  136235 main.go:141] libmachine: (cert-options-899685) DBG | domain cert-options-899685 has defined MAC address 52:54:00:d0:99:63 in network mk-cert-options-899685
	I0729 18:44:25.322325  136235 main.go:141] libmachine: (cert-options-899685) DBG | unable to find current IP address of domain cert-options-899685 in network mk-cert-options-899685
	I0729 18:44:25.322345  136235 main.go:141] libmachine: (cert-options-899685) DBG | I0729 18:44:25.322276  136258 retry.go:31] will retry after 385.995333ms: waiting for machine to come up
	I0729 18:44:25.709980  136235 main.go:141] libmachine: (cert-options-899685) DBG | domain cert-options-899685 has defined MAC address 52:54:00:d0:99:63 in network mk-cert-options-899685
	I0729 18:44:25.710443  136235 main.go:141] libmachine: (cert-options-899685) DBG | unable to find current IP address of domain cert-options-899685 in network mk-cert-options-899685
	I0729 18:44:25.710473  136235 main.go:141] libmachine: (cert-options-899685) DBG | I0729 18:44:25.710390  136258 retry.go:31] will retry after 502.221316ms: waiting for machine to come up
	I0729 18:44:26.213947  136235 main.go:141] libmachine: (cert-options-899685) DBG | domain cert-options-899685 has defined MAC address 52:54:00:d0:99:63 in network mk-cert-options-899685
	I0729 18:44:26.214399  136235 main.go:141] libmachine: (cert-options-899685) DBG | unable to find current IP address of domain cert-options-899685 in network mk-cert-options-899685
	I0729 18:44:26.214421  136235 main.go:141] libmachine: (cert-options-899685) DBG | I0729 18:44:26.214345  136258 retry.go:31] will retry after 575.813211ms: waiting for machine to come up
	I0729 18:44:26.792300  136235 main.go:141] libmachine: (cert-options-899685) DBG | domain cert-options-899685 has defined MAC address 52:54:00:d0:99:63 in network mk-cert-options-899685
	I0729 18:44:26.792961  136235 main.go:141] libmachine: (cert-options-899685) DBG | unable to find current IP address of domain cert-options-899685 in network mk-cert-options-899685
	I0729 18:44:26.792984  136235 main.go:141] libmachine: (cert-options-899685) DBG | I0729 18:44:26.792908  136258 retry.go:31] will retry after 932.379992ms: waiting for machine to come up
	I0729 18:44:27.726468  136235 main.go:141] libmachine: (cert-options-899685) DBG | domain cert-options-899685 has defined MAC address 52:54:00:d0:99:63 in network mk-cert-options-899685
	I0729 18:44:27.726988  136235 main.go:141] libmachine: (cert-options-899685) DBG | unable to find current IP address of domain cert-options-899685 in network mk-cert-options-899685
	I0729 18:44:27.727047  136235 main.go:141] libmachine: (cert-options-899685) DBG | I0729 18:44:27.726953  136258 retry.go:31] will retry after 940.345986ms: waiting for machine to come up
	I0729 18:44:28.668378  136235 main.go:141] libmachine: (cert-options-899685) DBG | domain cert-options-899685 has defined MAC address 52:54:00:d0:99:63 in network mk-cert-options-899685
	I0729 18:44:28.668842  136235 main.go:141] libmachine: (cert-options-899685) DBG | unable to find current IP address of domain cert-options-899685 in network mk-cert-options-899685
	I0729 18:44:28.668871  136235 main.go:141] libmachine: (cert-options-899685) DBG | I0729 18:44:28.668799  136258 retry.go:31] will retry after 1.351702492s: waiting for machine to come up
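[Editor's note] The repeated "will retry after ...: waiting for machine to come up" lines show the driver polling for the new VM's DHCP lease with a growing, jittered delay (207ms, 357ms, 385ms, 502ms, ...). A small sketch of that retry loop under assumed names; lookupIP stands in for the real DHCP-lease query against the domain's MAC address, and the backoff constants are illustrative.

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    var errNoIP = errors.New("no IP yet")

    // lookupIP is a placeholder for parsing the libvirt network's DHCP
    // leases for the domain's MAC address.
    func lookupIP(attempt int) (string, error) {
        if attempt < 10 {
            return "", errNoIP
        }
        return "192.168.83.47", nil // hypothetical address
    }

    // waitForIP retries with a jittered, slowly growing delay until the VM
    // reports an address or the deadline passes.
    func waitForIP(timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 200 * time.Millisecond
        for attempt := 0; time.Now().Before(deadline); attempt++ {
            ip, err := lookupIP(attempt)
            if err == nil {
                return ip, nil
            }
            sleep := delay + time.Duration(rand.Int63n(int64(delay)/2))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
            time.Sleep(sleep)
            if delay < 10*time.Second {
                delay = delay * 3 / 2
            }
        }
        return "", fmt.Errorf("timed out waiting for an IP")
    }

    func main() {
        ip, err := waitForIP(2 * time.Minute)
        fmt.Println(ip, err)
    }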
	I0729 18:44:24.546638  135833 pod_ready.go:102] pod "coredns-7db6d8ff4d-g6tp9" in "kube-system" namespace has status "Ready":"False"
	I0729 18:44:26.549884  135833 pod_ready.go:102] pod "coredns-7db6d8ff4d-g6tp9" in "kube-system" namespace has status "Ready":"False"
	I0729 18:44:28.544934  135833 pod_ready.go:92] pod "coredns-7db6d8ff4d-g6tp9" in "kube-system" namespace has status "Ready":"True"
	I0729 18:44:28.544956  135833 pod_ready.go:81] duration metric: took 6.006787981s for pod "coredns-7db6d8ff4d-g6tp9" in "kube-system" namespace to be "Ready" ...
	I0729 18:44:28.544965  135833 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-134415" in "kube-system" namespace to be "Ready" ...
	I0729 18:44:30.021722  136235 main.go:141] libmachine: (cert-options-899685) DBG | domain cert-options-899685 has defined MAC address 52:54:00:d0:99:63 in network mk-cert-options-899685
	I0729 18:44:30.022220  136235 main.go:141] libmachine: (cert-options-899685) DBG | unable to find current IP address of domain cert-options-899685 in network mk-cert-options-899685
	I0729 18:44:30.022236  136235 main.go:141] libmachine: (cert-options-899685) DBG | I0729 18:44:30.022178  136258 retry.go:31] will retry after 1.561639036s: waiting for machine to come up
	I0729 18:44:31.585049  136235 main.go:141] libmachine: (cert-options-899685) DBG | domain cert-options-899685 has defined MAC address 52:54:00:d0:99:63 in network mk-cert-options-899685
	I0729 18:44:31.585484  136235 main.go:141] libmachine: (cert-options-899685) DBG | unable to find current IP address of domain cert-options-899685 in network mk-cert-options-899685
	I0729 18:44:31.585508  136235 main.go:141] libmachine: (cert-options-899685) DBG | I0729 18:44:31.585436  136258 retry.go:31] will retry after 1.864425608s: waiting for machine to come up
	I0729 18:44:33.452501  136235 main.go:141] libmachine: (cert-options-899685) DBG | domain cert-options-899685 has defined MAC address 52:54:00:d0:99:63 in network mk-cert-options-899685
	I0729 18:44:33.452971  136235 main.go:141] libmachine: (cert-options-899685) DBG | unable to find current IP address of domain cert-options-899685 in network mk-cert-options-899685
	I0729 18:44:33.453014  136235 main.go:141] libmachine: (cert-options-899685) DBG | I0729 18:44:33.452933  136258 retry.go:31] will retry after 2.828025352s: waiting for machine to come up
	I0729 18:44:30.551631  135833 pod_ready.go:102] pod "etcd-pause-134415" in "kube-system" namespace has status "Ready":"False"
	I0729 18:44:32.552435  135833 pod_ready.go:102] pod "etcd-pause-134415" in "kube-system" namespace has status "Ready":"False"
	I0729 18:44:34.554612  135833 pod_ready.go:102] pod "etcd-pause-134415" in "kube-system" namespace has status "Ready":"False"
	I0729 18:44:37.052265  135833 pod_ready.go:92] pod "etcd-pause-134415" in "kube-system" namespace has status "Ready":"True"
	I0729 18:44:37.052290  135833 pod_ready.go:81] duration metric: took 8.507318327s for pod "etcd-pause-134415" in "kube-system" namespace to be "Ready" ...
	I0729 18:44:37.052302  135833 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-134415" in "kube-system" namespace to be "Ready" ...
	I0729 18:44:37.056170  135833 pod_ready.go:92] pod "kube-apiserver-pause-134415" in "kube-system" namespace has status "Ready":"True"
	I0729 18:44:37.056191  135833 pod_ready.go:81] duration metric: took 3.882713ms for pod "kube-apiserver-pause-134415" in "kube-system" namespace to be "Ready" ...
	I0729 18:44:37.056204  135833 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-134415" in "kube-system" namespace to be "Ready" ...
	I0729 18:44:37.060036  135833 pod_ready.go:92] pod "kube-controller-manager-pause-134415" in "kube-system" namespace has status "Ready":"True"
	I0729 18:44:37.060054  135833 pod_ready.go:81] duration metric: took 3.84252ms for pod "kube-controller-manager-pause-134415" in "kube-system" namespace to be "Ready" ...
	I0729 18:44:37.060064  135833 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-sm2kx" in "kube-system" namespace to be "Ready" ...
	I0729 18:44:37.063842  135833 pod_ready.go:92] pod "kube-proxy-sm2kx" in "kube-system" namespace has status "Ready":"True"
	I0729 18:44:37.063860  135833 pod_ready.go:81] duration metric: took 3.788243ms for pod "kube-proxy-sm2kx" in "kube-system" namespace to be "Ready" ...
	I0729 18:44:37.063879  135833 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-134415" in "kube-system" namespace to be "Ready" ...
	I0729 18:44:37.068180  135833 pod_ready.go:92] pod "kube-scheduler-pause-134415" in "kube-system" namespace has status "Ready":"True"
	I0729 18:44:37.068198  135833 pod_ready.go:81] duration metric: took 4.311284ms for pod "kube-scheduler-pause-134415" in "kube-system" namespace to be "Ready" ...
	I0729 18:44:37.068206  135833 pod_ready.go:38] duration metric: took 14.534300931s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:44:37.068227  135833 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 18:44:37.080552  135833 ops.go:34] apiserver oom_adj: -16
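[Editor's note] The ops.go line above verifies the apiserver's OOM protection by reading /proc/$(pgrep kube-apiserver)/oom_adj and expecting -16 (a strongly negative score that shields the process from the OOM killer). A tiny sketch of the same check, meant to run on the node itself; the pgrep arguments mirror the logged command.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // Find the kube-apiserver PID the same way the log does.
        out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
        if err != nil {
            panic(err)
        }
        pid := strings.TrimSpace(string(out))

        // -16 in the legacy oom_adj file indicates the process is strongly
        // protected from the OOM killer.
        adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            panic(err)
        }
        fmt.Printf("apiserver oom_adj: %s", adj)
    }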
	I0729 18:44:37.080567  135833 kubeadm.go:597] duration metric: took 22.009591792s to restartPrimaryControlPlane
	I0729 18:44:37.080575  135833 kubeadm.go:394] duration metric: took 22.290017715s to StartCluster
	I0729 18:44:37.080595  135833 settings.go:142] acquiring lock: {Name:mk5599d3fc2f664ed5eea99f33b4436f64ab8c83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:44:37.080681  135833 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19339-88081/kubeconfig
	I0729 18:44:37.081459  135833 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/kubeconfig: {Name:mk6702c0db404329102489655bdd2ff03ad6e919 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:44:37.081692  135833 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.77 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 18:44:37.081747  135833 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 18:44:37.081915  135833 config.go:182] Loaded profile config "pause-134415": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:44:37.083310  135833 out.go:177] * Enabled addons: 
	I0729 18:44:37.083313  135833 out.go:177] * Verifying Kubernetes components...
	I0729 18:44:36.282343  136235 main.go:141] libmachine: (cert-options-899685) DBG | domain cert-options-899685 has defined MAC address 52:54:00:d0:99:63 in network mk-cert-options-899685
	I0729 18:44:36.282695  136235 main.go:141] libmachine: (cert-options-899685) DBG | unable to find current IP address of domain cert-options-899685 in network mk-cert-options-899685
	I0729 18:44:36.282714  136235 main.go:141] libmachine: (cert-options-899685) DBG | I0729 18:44:36.282662  136258 retry.go:31] will retry after 2.607564531s: waiting for machine to come up
	I0729 18:44:37.084476  135833 addons.go:510] duration metric: took 2.728493ms for enable addons: enabled=[]
	I0729 18:44:37.084539  135833 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:44:37.243344  135833 ssh_runner.go:195] Run: sudo systemctl start kubelet
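[Editor's note] Each ssh_runner.go line runs a command inside the guest over SSH; here it is `sudo systemctl daemon-reload` followed by `sudo systemctl start kubelet`. A minimal sketch of such a runner with golang.org/x/crypto/ssh; the "docker" user, the key path under ~/.minikube, and runSSH itself are assumptions, and host-key checking is skipped only to keep the example short.

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // runSSH executes a single command on the VM and returns its combined output.
    func runSSH(addr, user, keyPath, cmd string) (string, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return "", err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return "", err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a throwaway test VM
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return "", err
        }
        defer client.Close()

        session, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer session.Close()

        out, err := session.CombinedOutput(cmd)
        return string(out), err
    }

    func main() {
        out, err := runSSH("192.168.61.77:22", "docker",
            os.ExpandEnv("$HOME/.minikube/machines/pause-134415/id_rsa"),
            "sudo systemctl daemon-reload && sudo systemctl start kubelet")
        fmt.Println(out, err)
    }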
	I0729 18:44:37.258077  135833 node_ready.go:35] waiting up to 6m0s for node "pause-134415" to be "Ready" ...
	I0729 18:44:37.261634  135833 node_ready.go:49] node "pause-134415" has status "Ready":"True"
	I0729 18:44:37.261657  135833 node_ready.go:38] duration metric: took 3.551951ms for node "pause-134415" to be "Ready" ...
	I0729 18:44:37.261668  135833 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:44:37.451728  135833 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-g6tp9" in "kube-system" namespace to be "Ready" ...
	I0729 18:44:37.849946  135833 pod_ready.go:92] pod "coredns-7db6d8ff4d-g6tp9" in "kube-system" namespace has status "Ready":"True"
	I0729 18:44:37.849976  135833 pod_ready.go:81] duration metric: took 398.218174ms for pod "coredns-7db6d8ff4d-g6tp9" in "kube-system" namespace to be "Ready" ...
	I0729 18:44:37.849986  135833 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-134415" in "kube-system" namespace to be "Ready" ...
	I0729 18:44:38.250731  135833 pod_ready.go:92] pod "etcd-pause-134415" in "kube-system" namespace has status "Ready":"True"
	I0729 18:44:38.250758  135833 pod_ready.go:81] duration metric: took 400.764385ms for pod "etcd-pause-134415" in "kube-system" namespace to be "Ready" ...
	I0729 18:44:38.250773  135833 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-134415" in "kube-system" namespace to be "Ready" ...
	I0729 18:44:38.649367  135833 pod_ready.go:92] pod "kube-apiserver-pause-134415" in "kube-system" namespace has status "Ready":"True"
	I0729 18:44:38.649390  135833 pod_ready.go:81] duration metric: took 398.609477ms for pod "kube-apiserver-pause-134415" in "kube-system" namespace to be "Ready" ...
	I0729 18:44:38.649400  135833 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-134415" in "kube-system" namespace to be "Ready" ...
	I0729 18:44:39.048595  135833 pod_ready.go:92] pod "kube-controller-manager-pause-134415" in "kube-system" namespace has status "Ready":"True"
	I0729 18:44:39.048618  135833 pod_ready.go:81] duration metric: took 399.211804ms for pod "kube-controller-manager-pause-134415" in "kube-system" namespace to be "Ready" ...
	I0729 18:44:39.048630  135833 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sm2kx" in "kube-system" namespace to be "Ready" ...
	I0729 18:44:39.448147  135833 pod_ready.go:92] pod "kube-proxy-sm2kx" in "kube-system" namespace has status "Ready":"True"
	I0729 18:44:39.448169  135833 pod_ready.go:81] duration metric: took 399.533067ms for pod "kube-proxy-sm2kx" in "kube-system" namespace to be "Ready" ...
	I0729 18:44:39.448181  135833 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-134415" in "kube-system" namespace to be "Ready" ...
	I0729 18:44:39.848426  135833 pod_ready.go:92] pod "kube-scheduler-pause-134415" in "kube-system" namespace has status "Ready":"True"
	I0729 18:44:39.848449  135833 pod_ready.go:81] duration metric: took 400.262206ms for pod "kube-scheduler-pause-134415" in "kube-system" namespace to be "Ready" ...
	I0729 18:44:39.848457  135833 pod_ready.go:38] duration metric: took 2.586777927s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:44:39.848475  135833 api_server.go:52] waiting for apiserver process to appear ...
	I0729 18:44:39.848526  135833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:44:39.863102  135833 api_server.go:72] duration metric: took 2.781378113s to wait for apiserver process to appear ...
	I0729 18:44:39.863126  135833 api_server.go:88] waiting for apiserver healthz status ...
	I0729 18:44:39.863142  135833 api_server.go:253] Checking apiserver healthz at https://192.168.61.77:8443/healthz ...
	I0729 18:44:39.868895  135833 api_server.go:279] https://192.168.61.77:8443/healthz returned 200:
	ok
	I0729 18:44:39.870400  135833 api_server.go:141] control plane version: v1.30.3
	I0729 18:44:39.870420  135833 api_server.go:131] duration metric: took 7.288369ms to wait for apiserver health ...
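[Editor's note] The api_server.go lines above poll https://192.168.61.77:8443/healthz until it returns 200 with the body "ok". A minimal probe in the same spirit; checkHealthz is an assumed helper, and the sketch skips TLS verification instead of loading the cluster CA, which is fine only for a quick local check.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // checkHealthz GETs the apiserver's /healthz endpoint and returns the
    // status code and body ("ok" when healthy).
    func checkHealthz(url string) (int, string, error) {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // The serving cert is signed by the cluster CA; skip
                // verification here rather than loading that CA.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return 0, "", err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        return resp.StatusCode, string(body), nil
    }

    func main() {
        code, body, err := checkHealthz("https://192.168.61.77:8443/healthz")
        fmt.Println(code, body, err) // expect: 200 ok <nil>
    }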
	I0729 18:44:39.870428  135833 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 18:44:40.050674  135833 system_pods.go:59] 6 kube-system pods found
	I0729 18:44:40.050703  135833 system_pods.go:61] "coredns-7db6d8ff4d-g6tp9" [769ed268-b082-415e-b416-a14e68a0084f] Running
	I0729 18:44:40.050708  135833 system_pods.go:61] "etcd-pause-134415" [281750af-8362-4638-b0db-01f5f69bcd38] Running
	I0729 18:44:40.050711  135833 system_pods.go:61] "kube-apiserver-pause-134415" [8a19558c-02cd-472d-a10b-255ad2a3dc66] Running
	I0729 18:44:40.050715  135833 system_pods.go:61] "kube-controller-manager-pause-134415" [b40ab25f-2497-4fda-8e23-5e52e1150e55] Running
	I0729 18:44:40.050718  135833 system_pods.go:61] "kube-proxy-sm2kx" [6fa9e9f4-9f39-41f5-9d79-4f394201011f] Running
	I0729 18:44:40.050721  135833 system_pods.go:61] "kube-scheduler-pause-134415" [4e7d5033-71d1-4921-af2f-37f868cc0896] Running
	I0729 18:44:40.050728  135833 system_pods.go:74] duration metric: took 180.293904ms to wait for pod list to return data ...
	I0729 18:44:40.050737  135833 default_sa.go:34] waiting for default service account to be created ...
	I0729 18:44:40.249885  135833 default_sa.go:45] found service account: "default"
	I0729 18:44:40.249913  135833 default_sa.go:55] duration metric: took 199.168372ms for default service account to be created ...
	I0729 18:44:40.249924  135833 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 18:44:40.451395  135833 system_pods.go:86] 6 kube-system pods found
	I0729 18:44:40.451422  135833 system_pods.go:89] "coredns-7db6d8ff4d-g6tp9" [769ed268-b082-415e-b416-a14e68a0084f] Running
	I0729 18:44:40.451427  135833 system_pods.go:89] "etcd-pause-134415" [281750af-8362-4638-b0db-01f5f69bcd38] Running
	I0729 18:44:40.451431  135833 system_pods.go:89] "kube-apiserver-pause-134415" [8a19558c-02cd-472d-a10b-255ad2a3dc66] Running
	I0729 18:44:40.451435  135833 system_pods.go:89] "kube-controller-manager-pause-134415" [b40ab25f-2497-4fda-8e23-5e52e1150e55] Running
	I0729 18:44:40.451441  135833 system_pods.go:89] "kube-proxy-sm2kx" [6fa9e9f4-9f39-41f5-9d79-4f394201011f] Running
	I0729 18:44:40.451445  135833 system_pods.go:89] "kube-scheduler-pause-134415" [4e7d5033-71d1-4921-af2f-37f868cc0896] Running
	I0729 18:44:40.451451  135833 system_pods.go:126] duration metric: took 201.522228ms to wait for k8s-apps to be running ...
	I0729 18:44:40.451461  135833 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 18:44:40.451516  135833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:44:40.466242  135833 system_svc.go:56] duration metric: took 14.771743ms WaitForService to wait for kubelet
	I0729 18:44:40.466273  135833 kubeadm.go:582] duration metric: took 3.384552318s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 18:44:40.466296  135833 node_conditions.go:102] verifying NodePressure condition ...
	I0729 18:44:40.648575  135833 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 18:44:40.648604  135833 node_conditions.go:123] node cpu capacity is 2
	I0729 18:44:40.648618  135833 node_conditions.go:105] duration metric: took 182.31716ms to run NodePressure ...
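[Editor's note] node_conditions.go reads the node's capacity (ephemeral storage 17734596Ki, 2 CPUs) and confirms no pressure conditions are set. A small client-go sketch in the same vein, assuming the default kubeconfig; it only illustrates where those numbers come from in the API.

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())

            // Any pressure condition that is True would fail the NodePressure check.
            for _, c := range n.Status.Conditions {
                switch c.Type {
                case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
                    if c.Status == corev1.ConditionTrue {
                        fmt.Printf("  pressure condition set: %s\n", c.Type)
                    }
                }
            }
        }
    }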
	I0729 18:44:40.648630  135833 start.go:241] waiting for startup goroutines ...
	I0729 18:44:40.648636  135833 start.go:246] waiting for cluster config update ...
	I0729 18:44:40.648643  135833 start.go:255] writing updated cluster config ...
	I0729 18:44:40.648921  135833 ssh_runner.go:195] Run: rm -f paused
	I0729 18:44:40.696696  135833 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 18:44:40.698642  135833 out.go:177] * Done! kubectl is now configured to use "pause-134415" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 29 18:44:41 pause-134415 crio[2761]: time="2024-07-29 18:44:41.356371996Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722278681356341615,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ea683e6a-d417-44ba-9770-2c5004ffbb49 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:44:41 pause-134415 crio[2761]: time="2024-07-29 18:44:41.357069559Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3c2c7a41-e8d9-4da6-b185-91ebd491f9ba name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:44:41 pause-134415 crio[2761]: time="2024-07-29 18:44:41.357310084Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3c2c7a41-e8d9-4da6-b185-91ebd491f9ba name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:44:41 pause-134415 crio[2761]: time="2024-07-29 18:44:41.357668254Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b66ff6521b908bcec86efa40985e9c08b511e7d95699d4ed139720453d5061b0,PodSandboxId:ef6ff6ce1302b3edb507ec1f7f6c85daafef2b1d8f75b716be2e7155c05dbd9a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722278661394605298,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g6tp9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 769ed268-b082-415e-b416-a14e68a0084f,},Annotations:map[string]string{io.kubernetes.container.hash: 4a31b561,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e8735cf29fbe7ee72a648e3e0732fa8e64900af4331b75aeadd0f649f1fd34d,PodSandboxId:d6726e028ad3bf754d8c2d5c5e881e126f791f83b44769b4759d1350fcac27f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722278661380885687,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sm2kx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 6fa9e9f4-9f39-41f5-9d79-4f394201011f,},Annotations:map[string]string{io.kubernetes.container.hash: 2fc3ab31,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f442242c7d7c603e6f2b1211eefa28e15b68aff15d73335b267e55e0adde5a3,PodSandboxId:92f73b06af82533868360d6850309d3bb73346f417cb3553fd49d40b2668e985,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722278657544061766,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a722ce892a235bcd005647b7378328,},Annot
ations:map[string]string{io.kubernetes.container.hash: 6e88c83,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bbd6a3a4556fbbe7dabed9875ca38f31a24db9897cfc76e781f41a6446420a2,PodSandboxId:92a0282e7c557cf3c0fcce3aa2abf883e895bb8c99697d6933284596e015e448,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722278657566043011,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fd238acbeb786e5ddde053dadb75eeb,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 7585830f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f087baee3eebf09602b5efdfddd38c12456421d72eaba77ea9ff2ec9ceadc0bc,PodSandboxId:67f46f43207ec552846f5023fccdc6a57a5ca9d2a69c7506f2e13dfec4552a16,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722278657570408829,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60d4a8cc0f6d4b01cb667cee784c02b9,},Annotations:map[string]string{io.kubernete
s.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e61889f641f5a09c6a334e7d94e7e05adbe55d8bbd3c04cb1dfad91bf26b67a,PodSandboxId:61be98a20264283a28f002672dbfd5add6b063a7004449b35be68226c103c25a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722278657552913010,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd8293080c772c37780ce473c40b2740,},Annotations:map[string]string{io.
kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68ff3286f312a7e6cc7f7eda1f926956aa3e408ea8ac569394382c5e2f65caff,PodSandboxId:59230aff02baeb35dea12224df4120ac6e8475d895b4528515ff4d0ffe2c949a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722278652675737170,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd8293080c772c37780ce473c40b2740,},Annotations:map[string]st
ring{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4cea604b117c735de1f3732a68407d4d9edbfe5bc517161f76a1e8b4f540edd,PodSandboxId:9a05fc85efe4c259433f1431b02dc8e2bc4c2065271649291c91d3ce6d87767f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722278652666715695,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60d4a8cc0f6d4b01cb667cee784c02b9,},Annotations:map[string]string{io.kubernetes.
container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d389fc1bf4eeb57dd955d429698dc986118bfab83fb8f94539d6288ec897d308,PodSandboxId:7764942ab46adf2d3699dd1759ad36ba65545efad294b30b60d454a0222ea17f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722278652604629075,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fd238acbeb786e5ddde053dadb75eeb,},Annotations:map[string]string{io.kubernetes.container.hash: 758
5830f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7844ba6af4b037548772d7ba9aab3b599cc5b42ffa5763882a3370c48658933f,PodSandboxId:a51f9d4338cfb90cbdfcd2dbc9a3e4f9dbf733b60d7a2b4ef8e6ff461387f1b1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722278652562880547,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a722ce892a235bcd005647b7378328,},Annotations:map[string]string{io.kubernetes.container.hash: 6e88c83,io.kubernetes.container.restartCount: 1,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dab08bb9b264b26d33674a128e2c7a091b0aa71b1a13ea91a0075b95ffb3c7a,PodSandboxId:e5f0aabc65bd5caa25962dab8f4ad095d8b45060a17323b5c3e26da47c54b824,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722278652168250755,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sm2kx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fa9e9f4-9f39-41f5-9d79-4f394201011f,},Annotations:map[string]string{io.kubernetes.container.hash: 2fc3ab31,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e50cff59ddb5d30844f6780ac1ffa4e2b09054f5cd6846bf7aedce6c54433e88,PodSandboxId:ab0446dbd9dbbf2a1944c4f41f4b574ecc97916dd2c6cd62c809094edf404a78,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722278626319402914,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g6tp9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 769ed268-b082-415e-b416-a14e68a0084f,},Annotations:map[string]string{io.kubernetes.container.hash: 4a31b561,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3c2c7a41-e8d9-4da6-b185-91ebd491f9ba name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:44:41 pause-134415 crio[2761]: time="2024-07-29 18:44:41.402615821Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a0338e9d-3864-44e8-9c22-de6dfede532b name=/runtime.v1.RuntimeService/Version
	Jul 29 18:44:41 pause-134415 crio[2761]: time="2024-07-29 18:44:41.402710442Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a0338e9d-3864-44e8-9c22-de6dfede532b name=/runtime.v1.RuntimeService/Version
	Jul 29 18:44:41 pause-134415 crio[2761]: time="2024-07-29 18:44:41.403788400Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=750b6a5d-ddc0-41a6-9367-30df42e92250 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:44:41 pause-134415 crio[2761]: time="2024-07-29 18:44:41.404159720Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722278681404138619,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=750b6a5d-ddc0-41a6-9367-30df42e92250 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:44:41 pause-134415 crio[2761]: time="2024-07-29 18:44:41.404792195Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b8128437-c048-488a-87b5-61f20f7289d6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:44:41 pause-134415 crio[2761]: time="2024-07-29 18:44:41.404865589Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b8128437-c048-488a-87b5-61f20f7289d6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:44:41 pause-134415 crio[2761]: time="2024-07-29 18:44:41.405097925Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b66ff6521b908bcec86efa40985e9c08b511e7d95699d4ed139720453d5061b0,PodSandboxId:ef6ff6ce1302b3edb507ec1f7f6c85daafef2b1d8f75b716be2e7155c05dbd9a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722278661394605298,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g6tp9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 769ed268-b082-415e-b416-a14e68a0084f,},Annotations:map[string]string{io.kubernetes.container.hash: 4a31b561,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e8735cf29fbe7ee72a648e3e0732fa8e64900af4331b75aeadd0f649f1fd34d,PodSandboxId:d6726e028ad3bf754d8c2d5c5e881e126f791f83b44769b4759d1350fcac27f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722278661380885687,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sm2kx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 6fa9e9f4-9f39-41f5-9d79-4f394201011f,},Annotations:map[string]string{io.kubernetes.container.hash: 2fc3ab31,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f442242c7d7c603e6f2b1211eefa28e15b68aff15d73335b267e55e0adde5a3,PodSandboxId:92f73b06af82533868360d6850309d3bb73346f417cb3553fd49d40b2668e985,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722278657544061766,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a722ce892a235bcd005647b7378328,},Annot
ations:map[string]string{io.kubernetes.container.hash: 6e88c83,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bbd6a3a4556fbbe7dabed9875ca38f31a24db9897cfc76e781f41a6446420a2,PodSandboxId:92a0282e7c557cf3c0fcce3aa2abf883e895bb8c99697d6933284596e015e448,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722278657566043011,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fd238acbeb786e5ddde053dadb75eeb,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 7585830f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f087baee3eebf09602b5efdfddd38c12456421d72eaba77ea9ff2ec9ceadc0bc,PodSandboxId:67f46f43207ec552846f5023fccdc6a57a5ca9d2a69c7506f2e13dfec4552a16,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722278657570408829,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60d4a8cc0f6d4b01cb667cee784c02b9,},Annotations:map[string]string{io.kubernete
s.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e61889f641f5a09c6a334e7d94e7e05adbe55d8bbd3c04cb1dfad91bf26b67a,PodSandboxId:61be98a20264283a28f002672dbfd5add6b063a7004449b35be68226c103c25a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722278657552913010,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd8293080c772c37780ce473c40b2740,},Annotations:map[string]string{io.
kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68ff3286f312a7e6cc7f7eda1f926956aa3e408ea8ac569394382c5e2f65caff,PodSandboxId:59230aff02baeb35dea12224df4120ac6e8475d895b4528515ff4d0ffe2c949a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722278652675737170,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd8293080c772c37780ce473c40b2740,},Annotations:map[string]st
ring{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4cea604b117c735de1f3732a68407d4d9edbfe5bc517161f76a1e8b4f540edd,PodSandboxId:9a05fc85efe4c259433f1431b02dc8e2bc4c2065271649291c91d3ce6d87767f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722278652666715695,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60d4a8cc0f6d4b01cb667cee784c02b9,},Annotations:map[string]string{io.kubernetes.
container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d389fc1bf4eeb57dd955d429698dc986118bfab83fb8f94539d6288ec897d308,PodSandboxId:7764942ab46adf2d3699dd1759ad36ba65545efad294b30b60d454a0222ea17f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722278652604629075,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fd238acbeb786e5ddde053dadb75eeb,},Annotations:map[string]string{io.kubernetes.container.hash: 758
5830f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7844ba6af4b037548772d7ba9aab3b599cc5b42ffa5763882a3370c48658933f,PodSandboxId:a51f9d4338cfb90cbdfcd2dbc9a3e4f9dbf733b60d7a2b4ef8e6ff461387f1b1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722278652562880547,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a722ce892a235bcd005647b7378328,},Annotations:map[string]string{io.kubernetes.container.hash: 6e88c83,io.kubernetes.container.restartCount: 1,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dab08bb9b264b26d33674a128e2c7a091b0aa71b1a13ea91a0075b95ffb3c7a,PodSandboxId:e5f0aabc65bd5caa25962dab8f4ad095d8b45060a17323b5c3e26da47c54b824,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722278652168250755,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sm2kx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fa9e9f4-9f39-41f5-9d79-4f394201011f,},Annotations:map[string]string{io.kubernetes.container.hash: 2fc3ab31,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e50cff59ddb5d30844f6780ac1ffa4e2b09054f5cd6846bf7aedce6c54433e88,PodSandboxId:ab0446dbd9dbbf2a1944c4f41f4b574ecc97916dd2c6cd62c809094edf404a78,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722278626319402914,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g6tp9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 769ed268-b082-415e-b416-a14e68a0084f,},Annotations:map[string]string{io.kubernetes.container.hash: 4a31b561,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b8128437-c048-488a-87b5-61f20f7289d6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:44:41 pause-134415 crio[2761]: time="2024-07-29 18:44:41.446511601Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e69432af-5e3a-4e16-889a-d8e8df431b9f name=/runtime.v1.RuntimeService/Version
	Jul 29 18:44:41 pause-134415 crio[2761]: time="2024-07-29 18:44:41.446590658Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e69432af-5e3a-4e16-889a-d8e8df431b9f name=/runtime.v1.RuntimeService/Version
	Jul 29 18:44:41 pause-134415 crio[2761]: time="2024-07-29 18:44:41.448177853Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d29fb61e-650c-47af-9033-ca9b00ef9d3e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:44:41 pause-134415 crio[2761]: time="2024-07-29 18:44:41.448781992Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722278681448759992,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d29fb61e-650c-47af-9033-ca9b00ef9d3e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:44:41 pause-134415 crio[2761]: time="2024-07-29 18:44:41.449228692Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4b39aa3e-7c1e-46bc-94a1-df61f46f852f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:44:41 pause-134415 crio[2761]: time="2024-07-29 18:44:41.449281441Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4b39aa3e-7c1e-46bc-94a1-df61f46f852f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:44:41 pause-134415 crio[2761]: time="2024-07-29 18:44:41.449702945Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b66ff6521b908bcec86efa40985e9c08b511e7d95699d4ed139720453d5061b0,PodSandboxId:ef6ff6ce1302b3edb507ec1f7f6c85daafef2b1d8f75b716be2e7155c05dbd9a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722278661394605298,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g6tp9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 769ed268-b082-415e-b416-a14e68a0084f,},Annotations:map[string]string{io.kubernetes.container.hash: 4a31b561,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e8735cf29fbe7ee72a648e3e0732fa8e64900af4331b75aeadd0f649f1fd34d,PodSandboxId:d6726e028ad3bf754d8c2d5c5e881e126f791f83b44769b4759d1350fcac27f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722278661380885687,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sm2kx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 6fa9e9f4-9f39-41f5-9d79-4f394201011f,},Annotations:map[string]string{io.kubernetes.container.hash: 2fc3ab31,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f442242c7d7c603e6f2b1211eefa28e15b68aff15d73335b267e55e0adde5a3,PodSandboxId:92f73b06af82533868360d6850309d3bb73346f417cb3553fd49d40b2668e985,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722278657544061766,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a722ce892a235bcd005647b7378328,},Annot
ations:map[string]string{io.kubernetes.container.hash: 6e88c83,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bbd6a3a4556fbbe7dabed9875ca38f31a24db9897cfc76e781f41a6446420a2,PodSandboxId:92a0282e7c557cf3c0fcce3aa2abf883e895bb8c99697d6933284596e015e448,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722278657566043011,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fd238acbeb786e5ddde053dadb75eeb,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 7585830f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f087baee3eebf09602b5efdfddd38c12456421d72eaba77ea9ff2ec9ceadc0bc,PodSandboxId:67f46f43207ec552846f5023fccdc6a57a5ca9d2a69c7506f2e13dfec4552a16,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722278657570408829,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60d4a8cc0f6d4b01cb667cee784c02b9,},Annotations:map[string]string{io.kubernete
s.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e61889f641f5a09c6a334e7d94e7e05adbe55d8bbd3c04cb1dfad91bf26b67a,PodSandboxId:61be98a20264283a28f002672dbfd5add6b063a7004449b35be68226c103c25a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722278657552913010,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd8293080c772c37780ce473c40b2740,},Annotations:map[string]string{io.
kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68ff3286f312a7e6cc7f7eda1f926956aa3e408ea8ac569394382c5e2f65caff,PodSandboxId:59230aff02baeb35dea12224df4120ac6e8475d895b4528515ff4d0ffe2c949a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722278652675737170,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd8293080c772c37780ce473c40b2740,},Annotations:map[string]st
ring{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4cea604b117c735de1f3732a68407d4d9edbfe5bc517161f76a1e8b4f540edd,PodSandboxId:9a05fc85efe4c259433f1431b02dc8e2bc4c2065271649291c91d3ce6d87767f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722278652666715695,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60d4a8cc0f6d4b01cb667cee784c02b9,},Annotations:map[string]string{io.kubernetes.
container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d389fc1bf4eeb57dd955d429698dc986118bfab83fb8f94539d6288ec897d308,PodSandboxId:7764942ab46adf2d3699dd1759ad36ba65545efad294b30b60d454a0222ea17f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722278652604629075,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fd238acbeb786e5ddde053dadb75eeb,},Annotations:map[string]string{io.kubernetes.container.hash: 758
5830f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7844ba6af4b037548772d7ba9aab3b599cc5b42ffa5763882a3370c48658933f,PodSandboxId:a51f9d4338cfb90cbdfcd2dbc9a3e4f9dbf733b60d7a2b4ef8e6ff461387f1b1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722278652562880547,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a722ce892a235bcd005647b7378328,},Annotations:map[string]string{io.kubernetes.container.hash: 6e88c83,io.kubernetes.container.restartCount: 1,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dab08bb9b264b26d33674a128e2c7a091b0aa71b1a13ea91a0075b95ffb3c7a,PodSandboxId:e5f0aabc65bd5caa25962dab8f4ad095d8b45060a17323b5c3e26da47c54b824,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722278652168250755,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sm2kx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fa9e9f4-9f39-41f5-9d79-4f394201011f,},Annotations:map[string]string{io.kubernetes.container.hash: 2fc3ab31,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e50cff59ddb5d30844f6780ac1ffa4e2b09054f5cd6846bf7aedce6c54433e88,PodSandboxId:ab0446dbd9dbbf2a1944c4f41f4b574ecc97916dd2c6cd62c809094edf404a78,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722278626319402914,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g6tp9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 769ed268-b082-415e-b416-a14e68a0084f,},Annotations:map[string]string{io.kubernetes.container.hash: 4a31b561,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4b39aa3e-7c1e-46bc-94a1-df61f46f852f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:44:41 pause-134415 crio[2761]: time="2024-07-29 18:44:41.495415815Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6cf6d370-9bbb-46fd-a517-8e4a7b2df46f name=/runtime.v1.RuntimeService/Version
	Jul 29 18:44:41 pause-134415 crio[2761]: time="2024-07-29 18:44:41.495550485Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6cf6d370-9bbb-46fd-a517-8e4a7b2df46f name=/runtime.v1.RuntimeService/Version
	Jul 29 18:44:41 pause-134415 crio[2761]: time="2024-07-29 18:44:41.496971005Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=27437daa-e39e-4f24-86d8-729d9e823363 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:44:41 pause-134415 crio[2761]: time="2024-07-29 18:44:41.497313241Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722278681497292671,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=27437daa-e39e-4f24-86d8-729d9e823363 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:44:41 pause-134415 crio[2761]: time="2024-07-29 18:44:41.498049891Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ed9fc1f2-c5e1-4569-afd1-61e8d07ee533 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:44:41 pause-134415 crio[2761]: time="2024-07-29 18:44:41.498139057Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ed9fc1f2-c5e1-4569-afd1-61e8d07ee533 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:44:41 pause-134415 crio[2761]: time="2024-07-29 18:44:41.498394451Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b66ff6521b908bcec86efa40985e9c08b511e7d95699d4ed139720453d5061b0,PodSandboxId:ef6ff6ce1302b3edb507ec1f7f6c85daafef2b1d8f75b716be2e7155c05dbd9a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722278661394605298,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g6tp9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 769ed268-b082-415e-b416-a14e68a0084f,},Annotations:map[string]string{io.kubernetes.container.hash: 4a31b561,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e8735cf29fbe7ee72a648e3e0732fa8e64900af4331b75aeadd0f649f1fd34d,PodSandboxId:d6726e028ad3bf754d8c2d5c5e881e126f791f83b44769b4759d1350fcac27f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722278661380885687,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sm2kx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 6fa9e9f4-9f39-41f5-9d79-4f394201011f,},Annotations:map[string]string{io.kubernetes.container.hash: 2fc3ab31,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f442242c7d7c603e6f2b1211eefa28e15b68aff15d73335b267e55e0adde5a3,PodSandboxId:92f73b06af82533868360d6850309d3bb73346f417cb3553fd49d40b2668e985,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722278657544061766,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a722ce892a235bcd005647b7378328,},Annot
ations:map[string]string{io.kubernetes.container.hash: 6e88c83,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bbd6a3a4556fbbe7dabed9875ca38f31a24db9897cfc76e781f41a6446420a2,PodSandboxId:92a0282e7c557cf3c0fcce3aa2abf883e895bb8c99697d6933284596e015e448,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722278657566043011,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fd238acbeb786e5ddde053dadb75eeb,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 7585830f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f087baee3eebf09602b5efdfddd38c12456421d72eaba77ea9ff2ec9ceadc0bc,PodSandboxId:67f46f43207ec552846f5023fccdc6a57a5ca9d2a69c7506f2e13dfec4552a16,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722278657570408829,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60d4a8cc0f6d4b01cb667cee784c02b9,},Annotations:map[string]string{io.kubernete
s.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e61889f641f5a09c6a334e7d94e7e05adbe55d8bbd3c04cb1dfad91bf26b67a,PodSandboxId:61be98a20264283a28f002672dbfd5add6b063a7004449b35be68226c103c25a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722278657552913010,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd8293080c772c37780ce473c40b2740,},Annotations:map[string]string{io.
kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68ff3286f312a7e6cc7f7eda1f926956aa3e408ea8ac569394382c5e2f65caff,PodSandboxId:59230aff02baeb35dea12224df4120ac6e8475d895b4528515ff4d0ffe2c949a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722278652675737170,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd8293080c772c37780ce473c40b2740,},Annotations:map[string]st
ring{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4cea604b117c735de1f3732a68407d4d9edbfe5bc517161f76a1e8b4f540edd,PodSandboxId:9a05fc85efe4c259433f1431b02dc8e2bc4c2065271649291c91d3ce6d87767f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722278652666715695,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60d4a8cc0f6d4b01cb667cee784c02b9,},Annotations:map[string]string{io.kubernetes.
container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d389fc1bf4eeb57dd955d429698dc986118bfab83fb8f94539d6288ec897d308,PodSandboxId:7764942ab46adf2d3699dd1759ad36ba65545efad294b30b60d454a0222ea17f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722278652604629075,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fd238acbeb786e5ddde053dadb75eeb,},Annotations:map[string]string{io.kubernetes.container.hash: 758
5830f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7844ba6af4b037548772d7ba9aab3b599cc5b42ffa5763882a3370c48658933f,PodSandboxId:a51f9d4338cfb90cbdfcd2dbc9a3e4f9dbf733b60d7a2b4ef8e6ff461387f1b1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722278652562880547,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a722ce892a235bcd005647b7378328,},Annotations:map[string]string{io.kubernetes.container.hash: 6e88c83,io.kubernetes.container.restartCount: 1,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dab08bb9b264b26d33674a128e2c7a091b0aa71b1a13ea91a0075b95ffb3c7a,PodSandboxId:e5f0aabc65bd5caa25962dab8f4ad095d8b45060a17323b5c3e26da47c54b824,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722278652168250755,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sm2kx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fa9e9f4-9f39-41f5-9d79-4f394201011f,},Annotations:map[string]string{io.kubernetes.container.hash: 2fc3ab31,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e50cff59ddb5d30844f6780ac1ffa4e2b09054f5cd6846bf7aedce6c54433e88,PodSandboxId:ab0446dbd9dbbf2a1944c4f41f4b574ecc97916dd2c6cd62c809094edf404a78,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722278626319402914,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g6tp9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 769ed268-b082-415e-b416-a14e68a0084f,},Annotations:map[string]string{io.kubernetes.container.hash: 4a31b561,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ed9fc1f2-c5e1-4569-afd1-61e8d07ee533 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b66ff6521b908       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   20 seconds ago      Running             coredns                   1                   ef6ff6ce1302b       coredns-7db6d8ff4d-g6tp9
	0e8735cf29fbe       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   20 seconds ago      Running             kube-proxy                2                   d6726e028ad3b       kube-proxy-sm2kx
	f087baee3eebf       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   23 seconds ago      Running             kube-scheduler            2                   67f46f43207ec       kube-scheduler-pause-134415
	4bbd6a3a4556f       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   24 seconds ago      Running             kube-apiserver            2                   92a0282e7c557       kube-apiserver-pause-134415
	1e61889f641f5       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   24 seconds ago      Running             kube-controller-manager   2                   61be98a202642       kube-controller-manager-pause-134415
	9f442242c7d7c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   24 seconds ago      Running             etcd                      2                   92f73b06af825       etcd-pause-134415
	68ff3286f312a       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   28 seconds ago      Exited              kube-controller-manager   1                   59230aff02bae       kube-controller-manager-pause-134415
	e4cea604b117c       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   28 seconds ago      Exited              kube-scheduler            1                   9a05fc85efe4c       kube-scheduler-pause-134415
	d389fc1bf4eeb       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   28 seconds ago      Exited              kube-apiserver            1                   7764942ab46ad       kube-apiserver-pause-134415
	7844ba6af4b03       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   29 seconds ago      Exited              etcd                      1                   a51f9d4338cfb       etcd-pause-134415
	4dab08bb9b264       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   29 seconds ago      Exited              kube-proxy                1                   e5f0aabc65bd5       kube-proxy-sm2kx
	e50cff59ddb5d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   55 seconds ago      Exited              coredns                   0                   ab0446dbd9dbb       coredns-7db6d8ff4d-g6tp9
	
	
	==> coredns [b66ff6521b908bcec86efa40985e9c08b511e7d95699d4ed139720453d5061b0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:35449 - 49410 "HINFO IN 4671401912247073721.1201125565082427322. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009955718s
	
	
	==> coredns [e50cff59ddb5d30844f6780ac1ffa4e2b09054f5cd6846bf7aedce6c54433e88] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:58158 - 1188 "HINFO IN 7325360415050638714.4383664452241786804. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017025551s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/kubernetes: Trace[1557520512]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 18:43:46.526) (total time: 16967ms):
	Trace[1557520512]: [16.96727735s] [16.96727735s] END
	[INFO] plugin/kubernetes: Trace[808941779]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 18:43:46.525) (total time: 16969ms):
	Trace[808941779]: [16.969201841s] [16.969201841s] END
	[INFO] plugin/kubernetes: Trace[1185478479]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 18:43:46.525) (total time: 16969ms):
	Trace[1185478479]: [16.969309099s] [16.969309099s] END
	
	
	==> describe nodes <==
	Name:               pause-134415
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-134415
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=de53afae5e8f4269438099f4dad14d93a8a17e35
	                    minikube.k8s.io/name=pause-134415
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T18_43_30_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 18:43:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-134415
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 18:44:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 18:44:20 +0000   Mon, 29 Jul 2024 18:43:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 18:44:20 +0000   Mon, 29 Jul 2024 18:43:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 18:44:20 +0000   Mon, 29 Jul 2024 18:43:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 18:44:20 +0000   Mon, 29 Jul 2024 18:43:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.77
	  Hostname:    pause-134415
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 19cbbd61a6e5474cb723ecddf35bc51b
	  System UUID:                19cbbd61-a6e5-474c-b723-ecddf35bc51b
	  Boot ID:                    9715c901-bdc1-40d4-acda-0a539b9ea554
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-g6tp9                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     56s
	  kube-system                 etcd-pause-134415                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         72s
	  kube-system                 kube-apiserver-pause-134415             250m (12%)    0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 kube-controller-manager-pause-134415    200m (10%)    0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-proxy-sm2kx                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 kube-scheduler-pause-134415             100m (5%)     0 (0%)      0 (0%)           0 (0%)         72s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 55s                kube-proxy       
	  Normal  Starting                 20s                kube-proxy       
	  Normal  Starting                 78s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  78s (x8 over 78s)  kubelet          Node pause-134415 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    78s (x8 over 78s)  kubelet          Node pause-134415 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     78s (x7 over 78s)  kubelet          Node pause-134415 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  78s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    72s                kubelet          Node pause-134415 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  72s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  72s                kubelet          Node pause-134415 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     72s                kubelet          Node pause-134415 status is now: NodeHasSufficientPID
	  Normal  Starting                 72s                kubelet          Starting kubelet.
	  Normal  NodeReady                71s                kubelet          Node pause-134415 status is now: NodeReady
	  Normal  RegisteredNode           59s                node-controller  Node pause-134415 event: Registered Node pause-134415 in Controller
	  Normal  Starting                 24s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  24s (x8 over 24s)  kubelet          Node pause-134415 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s (x8 over 24s)  kubelet          Node pause-134415 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s (x7 over 24s)  kubelet          Node pause-134415 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8s                 node-controller  Node pause-134415 event: Registered Node pause-134415 in Controller
	
	
	==> dmesg <==
	[  +8.828988] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.063536] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.070085] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.187204] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.112096] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.271387] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +4.467235] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[  +0.074226] kauditd_printk_skb: 130 callbacks suppressed
	[  +5.146178] systemd-fstab-generator[945]: Ignoring "noauto" option for root device
	[  +0.083024] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.012367] systemd-fstab-generator[1280]: Ignoring "noauto" option for root device
	[  +0.075923] kauditd_printk_skb: 69 callbacks suppressed
	[ +14.214377] systemd-fstab-generator[1485]: Ignoring "noauto" option for root device
	[  +0.078992] kauditd_printk_skb: 21 callbacks suppressed
	[Jul29 18:44] systemd-fstab-generator[2128]: Ignoring "noauto" option for root device
	[  +0.047387] kauditd_printk_skb: 69 callbacks suppressed
	[  +0.125339] systemd-fstab-generator[2176]: Ignoring "noauto" option for root device
	[  +0.266792] systemd-fstab-generator[2224]: Ignoring "noauto" option for root device
	[  +0.295570] systemd-fstab-generator[2351]: Ignoring "noauto" option for root device
	[  +0.744390] systemd-fstab-generator[2641]: Ignoring "noauto" option for root device
	[  +1.032543] systemd-fstab-generator[2916]: Ignoring "noauto" option for root device
	[  +2.617927] systemd-fstab-generator[3349]: Ignoring "noauto" option for root device
	[  +0.073023] kauditd_printk_skb: 238 callbacks suppressed
	[ +16.199637] kauditd_printk_skb: 49 callbacks suppressed
	[  +4.007503] systemd-fstab-generator[3770]: Ignoring "noauto" option for root device
	
	
	==> etcd [7844ba6af4b037548772d7ba9aab3b599cc5b42ffa5763882a3370c48658933f] <==
	{"level":"info","ts":"2024-07-29T18:44:13.014303Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"18.600101ms"}
	{"level":"info","ts":"2024-07-29T18:44:13.079358Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-07-29T18:44:13.159964Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"8c1b28ae7d8b253d","local-member-id":"8989ab7f8b274152","commit-index":431}
	{"level":"info","ts":"2024-07-29T18:44:13.16008Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8989ab7f8b274152 switched to configuration voters=()"}
	{"level":"info","ts":"2024-07-29T18:44:13.160135Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8989ab7f8b274152 became follower at term 2"}
	{"level":"info","ts":"2024-07-29T18:44:13.160147Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 8989ab7f8b274152 [peers: [], term: 2, commit: 431, applied: 0, lastindex: 431, lastterm: 2]"}
	{"level":"warn","ts":"2024-07-29T18:44:13.190611Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-07-29T18:44:13.228997Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":415}
	{"level":"info","ts":"2024-07-29T18:44:13.258815Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-07-29T18:44:13.265512Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"8989ab7f8b274152","timeout":"7s"}
	{"level":"info","ts":"2024-07-29T18:44:13.269724Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"8989ab7f8b274152"}
	{"level":"info","ts":"2024-07-29T18:44:13.269983Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"8989ab7f8b274152","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-07-29T18:44:13.27065Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8989ab7f8b274152 switched to configuration voters=(9910641019289289042)"}
	{"level":"info","ts":"2024-07-29T18:44:13.273936Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"8c1b28ae7d8b253d","local-member-id":"8989ab7f8b274152","added-peer-id":"8989ab7f8b274152","added-peer-peer-urls":["https://192.168.61.77:2380"]}
	{"level":"info","ts":"2024-07-29T18:44:13.274611Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8c1b28ae7d8b253d","local-member-id":"8989ab7f8b274152","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T18:44:13.277775Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T18:44:13.280939Z","caller":"etcdserver/server.go:744","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"8989ab7f8b274152","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-07-29T18:44:13.284737Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T18:44:13.319336Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T18:44:13.323189Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T18:44:13.337097Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.77:2380"}
	{"level":"info","ts":"2024-07-29T18:44:13.33757Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.77:2380"}
	{"level":"info","ts":"2024-07-29T18:44:13.332414Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T18:44:13.343731Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8989ab7f8b274152","initial-advertise-peer-urls":["https://192.168.61.77:2380"],"listen-peer-urls":["https://192.168.61.77:2380"],"advertise-client-urls":["https://192.168.61.77:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.77:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T18:44:13.343763Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	
	
	==> etcd [9f442242c7d7c603e6f2b1211eefa28e15b68aff15d73335b267e55e0adde5a3] <==
	{"level":"info","ts":"2024-07-29T18:44:18.158691Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T18:44:18.158726Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T18:44:18.160687Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8989ab7f8b274152 switched to configuration voters=(9910641019289289042)"}
	{"level":"info","ts":"2024-07-29T18:44:18.163549Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"8c1b28ae7d8b253d","local-member-id":"8989ab7f8b274152","added-peer-id":"8989ab7f8b274152","added-peer-peer-urls":["https://192.168.61.77:2380"]}
	{"level":"info","ts":"2024-07-29T18:44:18.1637Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8c1b28ae7d8b253d","local-member-id":"8989ab7f8b274152","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T18:44:18.163768Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T18:44:18.16794Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T18:44:18.168137Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8989ab7f8b274152","initial-advertise-peer-urls":["https://192.168.61.77:2380"],"listen-peer-urls":["https://192.168.61.77:2380"],"advertise-client-urls":["https://192.168.61.77:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.77:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T18:44:18.168183Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T18:44:18.168288Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.77:2380"}
	{"level":"info","ts":"2024-07-29T18:44:18.168312Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.77:2380"}
	{"level":"info","ts":"2024-07-29T18:44:19.272139Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8989ab7f8b274152 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-29T18:44:19.272242Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8989ab7f8b274152 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-29T18:44:19.272279Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8989ab7f8b274152 received MsgPreVoteResp from 8989ab7f8b274152 at term 2"}
	{"level":"info","ts":"2024-07-29T18:44:19.272309Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8989ab7f8b274152 became candidate at term 3"}
	{"level":"info","ts":"2024-07-29T18:44:19.272333Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8989ab7f8b274152 received MsgVoteResp from 8989ab7f8b274152 at term 3"}
	{"level":"info","ts":"2024-07-29T18:44:19.27236Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8989ab7f8b274152 became leader at term 3"}
	{"level":"info","ts":"2024-07-29T18:44:19.272385Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8989ab7f8b274152 elected leader 8989ab7f8b274152 at term 3"}
	{"level":"info","ts":"2024-07-29T18:44:19.279628Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T18:44:19.279897Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T18:44:19.279627Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"8989ab7f8b274152","local-member-attributes":"{Name:pause-134415 ClientURLs:[https://192.168.61.77:2379]}","request-path":"/0/members/8989ab7f8b274152/attributes","cluster-id":"8c1b28ae7d8b253d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T18:44:19.280169Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T18:44:19.280196Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T18:44:19.28181Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.77:2379"}
	{"level":"info","ts":"2024-07-29T18:44:19.281913Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 18:44:41 up 1 min,  0 users,  load average: 1.00, 0.38, 0.14
	Linux pause-134415 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4bbd6a3a4556fbbe7dabed9875ca38f31a24db9897cfc76e781f41a6446420a2] <==
	I0729 18:44:20.738164       1 shared_informer.go:320] Caches are synced for configmaps
	I0729 18:44:20.738351       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0729 18:44:20.738381       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0729 18:44:20.738577       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 18:44:20.741611       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0729 18:44:20.744162       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 18:44:20.755762       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0729 18:44:20.755869       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0729 18:44:20.755911       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 18:44:20.755928       1 policy_source.go:224] refreshing policies
	E0729 18:44:20.758985       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0729 18:44:20.760803       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 18:44:20.767986       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0729 18:44:20.768050       1 aggregator.go:165] initial CRD sync complete...
	I0729 18:44:20.768091       1 autoregister_controller.go:141] Starting autoregister controller
	I0729 18:44:20.768114       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0729 18:44:20.768136       1 cache.go:39] Caches are synced for autoregister controller
	I0729 18:44:21.556184       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0729 18:44:22.359936       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0729 18:44:22.371391       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0729 18:44:22.408741       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 18:44:22.438366       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0729 18:44:22.445789       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0729 18:44:33.095390       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0729 18:44:33.219206       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [d389fc1bf4eeb57dd955d429698dc986118bfab83fb8f94539d6288ec897d308] <==
	I0729 18:44:13.241145       1 options.go:221] external host was not specified, using 192.168.61.77
	I0729 18:44:13.242173       1 server.go:148] Version: v1.30.3
	I0729 18:44:13.242217       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-controller-manager [1e61889f641f5a09c6a334e7d94e7e05adbe55d8bbd3c04cb1dfad91bf26b67a] <==
	I0729 18:44:33.087321       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0729 18:44:33.092581       1 shared_informer.go:320] Caches are synced for job
	I0729 18:44:33.095788       1 shared_informer.go:320] Caches are synced for deployment
	I0729 18:44:33.097810       1 shared_informer.go:320] Caches are synced for daemon sets
	I0729 18:44:33.100794       1 shared_informer.go:320] Caches are synced for PVC protection
	I0729 18:44:33.103812       1 shared_informer.go:320] Caches are synced for taint
	I0729 18:44:33.104128       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0729 18:44:33.104324       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-134415"
	I0729 18:44:33.104393       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0729 18:44:33.104967       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0729 18:44:33.107694       1 shared_informer.go:320] Caches are synced for GC
	I0729 18:44:33.111512       1 shared_informer.go:320] Caches are synced for endpoint
	I0729 18:44:33.120075       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 18:44:33.120360       1 shared_informer.go:320] Caches are synced for HPA
	I0729 18:44:33.127818       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 18:44:33.147201       1 shared_informer.go:320] Caches are synced for ephemeral
	I0729 18:44:33.153018       1 shared_informer.go:320] Caches are synced for attach detach
	I0729 18:44:33.165572       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0729 18:44:33.169257       1 shared_informer.go:320] Caches are synced for stateful set
	I0729 18:44:33.170678       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0729 18:44:33.177306       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="71.948752ms"
	I0729 18:44:33.177515       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="120.851µs"
	I0729 18:44:33.538785       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 18:44:33.538822       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0729 18:44:33.572029       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [68ff3286f312a7e6cc7f7eda1f926956aa3e408ea8ac569394382c5e2f65caff] <==
	
	
	==> kube-proxy [0e8735cf29fbe7ee72a648e3e0732fa8e64900af4331b75aeadd0f649f1fd34d] <==
	I0729 18:44:21.572388       1 server_linux.go:69] "Using iptables proxy"
	I0729 18:44:21.589234       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.77"]
	I0729 18:44:21.660012       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 18:44:21.660083       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 18:44:21.660103       1 server_linux.go:165] "Using iptables Proxier"
	I0729 18:44:21.663214       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 18:44:21.663559       1 server.go:872] "Version info" version="v1.30.3"
	I0729 18:44:21.663592       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 18:44:21.664812       1 config.go:192] "Starting service config controller"
	I0729 18:44:21.664849       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 18:44:21.665342       1 config.go:101] "Starting endpoint slice config controller"
	I0729 18:44:21.665374       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 18:44:21.665940       1 config.go:319] "Starting node config controller"
	I0729 18:44:21.665970       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 18:44:21.765570       1 shared_informer.go:320] Caches are synced for service config
	I0729 18:44:21.765688       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 18:44:21.766211       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [4dab08bb9b264b26d33674a128e2c7a091b0aa71b1a13ea91a0075b95ffb3c7a] <==
	
	
	==> kube-scheduler [e4cea604b117c735de1f3732a68407d4d9edbfe5bc517161f76a1e8b4f540edd] <==
	
	
	==> kube-scheduler [f087baee3eebf09602b5efdfddd38c12456421d72eaba77ea9ff2ec9ceadc0bc] <==
	I0729 18:44:18.605989       1 serving.go:380] Generated self-signed cert in-memory
	W0729 18:44:20.634633       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 18:44:20.634825       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 18:44:20.634945       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 18:44:20.634979       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 18:44:20.671700       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0729 18:44:20.671798       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 18:44:20.675669       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 18:44:20.675759       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 18:44:20.676346       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0729 18:44:20.676401       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 18:44:20.776059       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 18:44:17 pause-134415 kubelet[3356]: I0729 18:44:17.308672    3356 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bd8293080c772c37780ce473c40b2740-ca-certs\") pod \"kube-controller-manager-pause-134415\" (UID: \"bd8293080c772c37780ce473c40b2740\") " pod="kube-system/kube-controller-manager-pause-134415"
	Jul 29 18:44:17 pause-134415 kubelet[3356]: I0729 18:44:17.308709    3356 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd8293080c772c37780ce473c40b2740-kubeconfig\") pod \"kube-controller-manager-pause-134415\" (UID: \"bd8293080c772c37780ce473c40b2740\") " pod="kube-system/kube-controller-manager-pause-134415"
	Jul 29 18:44:17 pause-134415 kubelet[3356]: I0729 18:44:17.308726    3356 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/60d4a8cc0f6d4b01cb667cee784c02b9-kubeconfig\") pod \"kube-scheduler-pause-134415\" (UID: \"60d4a8cc0f6d4b01cb667cee784c02b9\") " pod="kube-system/kube-scheduler-pause-134415"
	Jul 29 18:44:17 pause-134415 kubelet[3356]: I0729 18:44:17.356324    3356 kubelet_node_status.go:73] "Attempting to register node" node="pause-134415"
	Jul 29 18:44:17 pause-134415 kubelet[3356]: E0729 18:44:17.357317    3356 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.77:8443: connect: connection refused" node="pause-134415"
	Jul 29 18:44:17 pause-134415 kubelet[3356]: I0729 18:44:17.527583    3356 scope.go:117] "RemoveContainer" containerID="68ff3286f312a7e6cc7f7eda1f926956aa3e408ea8ac569394382c5e2f65caff"
	Jul 29 18:44:17 pause-134415 kubelet[3356]: I0729 18:44:17.527834    3356 scope.go:117] "RemoveContainer" containerID="e4cea604b117c735de1f3732a68407d4d9edbfe5bc517161f76a1e8b4f540edd"
	Jul 29 18:44:17 pause-134415 kubelet[3356]: I0729 18:44:17.528862    3356 scope.go:117] "RemoveContainer" containerID="d389fc1bf4eeb57dd955d429698dc986118bfab83fb8f94539d6288ec897d308"
	Jul 29 18:44:17 pause-134415 kubelet[3356]: I0729 18:44:17.529526    3356 scope.go:117] "RemoveContainer" containerID="7844ba6af4b037548772d7ba9aab3b599cc5b42ffa5763882a3370c48658933f"
	Jul 29 18:44:17 pause-134415 kubelet[3356]: E0729 18:44:17.657333    3356 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-134415?timeout=10s\": dial tcp 192.168.61.77:8443: connect: connection refused" interval="800ms"
	Jul 29 18:44:17 pause-134415 kubelet[3356]: I0729 18:44:17.759900    3356 kubelet_node_status.go:73] "Attempting to register node" node="pause-134415"
	Jul 29 18:44:17 pause-134415 kubelet[3356]: E0729 18:44:17.760976    3356 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.77:8443: connect: connection refused" node="pause-134415"
	Jul 29 18:44:18 pause-134415 kubelet[3356]: I0729 18:44:18.562557    3356 kubelet_node_status.go:73] "Attempting to register node" node="pause-134415"
	Jul 29 18:44:20 pause-134415 kubelet[3356]: I0729 18:44:20.851199    3356 kubelet_node_status.go:112] "Node was previously registered" node="pause-134415"
	Jul 29 18:44:20 pause-134415 kubelet[3356]: I0729 18:44:20.851296    3356 kubelet_node_status.go:76] "Successfully registered node" node="pause-134415"
	Jul 29 18:44:20 pause-134415 kubelet[3356]: I0729 18:44:20.853242    3356 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 29 18:44:20 pause-134415 kubelet[3356]: I0729 18:44:20.854294    3356 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 29 18:44:21 pause-134415 kubelet[3356]: I0729 18:44:21.049673    3356 apiserver.go:52] "Watching apiserver"
	Jul 29 18:44:21 pause-134415 kubelet[3356]: I0729 18:44:21.053582    3356 topology_manager.go:215] "Topology Admit Handler" podUID="6fa9e9f4-9f39-41f5-9d79-4f394201011f" podNamespace="kube-system" podName="kube-proxy-sm2kx"
	Jul 29 18:44:21 pause-134415 kubelet[3356]: I0729 18:44:21.054945    3356 topology_manager.go:215] "Topology Admit Handler" podUID="769ed268-b082-415e-b416-a14e68a0084f" podNamespace="kube-system" podName="coredns-7db6d8ff4d-g6tp9"
	Jul 29 18:44:21 pause-134415 kubelet[3356]: I0729 18:44:21.057106    3356 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6fa9e9f4-9f39-41f5-9d79-4f394201011f-xtables-lock\") pod \"kube-proxy-sm2kx\" (UID: \"6fa9e9f4-9f39-41f5-9d79-4f394201011f\") " pod="kube-system/kube-proxy-sm2kx"
	Jul 29 18:44:21 pause-134415 kubelet[3356]: I0729 18:44:21.057164    3356 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6fa9e9f4-9f39-41f5-9d79-4f394201011f-lib-modules\") pod \"kube-proxy-sm2kx\" (UID: \"6fa9e9f4-9f39-41f5-9d79-4f394201011f\") " pod="kube-system/kube-proxy-sm2kx"
	Jul 29 18:44:21 pause-134415 kubelet[3356]: I0729 18:44:21.155100    3356 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 29 18:44:21 pause-134415 kubelet[3356]: I0729 18:44:21.354706    3356 scope.go:117] "RemoveContainer" containerID="4dab08bb9b264b26d33674a128e2c7a091b0aa71b1a13ea91a0075b95ffb3c7a"
	Jul 29 18:44:28 pause-134415 kubelet[3356]: I0729 18:44:28.154324    3356 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-134415 -n pause-134415
helpers_test.go:261: (dbg) Run:  kubectl --context pause-134415 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
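The post-mortem above closes with a kubectl query for pods in pause-134415 that are not in the Running phase. A minimal client-go sketch of an equivalent readiness check is shown here for reference; the context name pause-134415 is taken from the commands above, while the program itself is illustrative and not part of helpers_test.go:

	package main

	import (
		"context"
		"fmt"
		"log"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a client for the pause-134415 context from the default kubeconfig.
		rules := clientcmd.NewDefaultClientConfigLoadingRules()
		overrides := &clientcmd.ConfigOverrides{CurrentContext: "pause-134415"}
		cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, overrides).ClientConfig()
		if err != nil {
			log.Fatal(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}

		// List kube-system pods and report which ones are not Ready yet,
		// roughly what the helpers above are waiting for.
		pods, err := client.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
		if err != nil {
			log.Fatal(err)
		}
		for _, p := range pods.Items {
			ready := false
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			fmt.Printf("%-45s phase=%-9s ready=%v\n", p.Name, p.Status.Phase, ready)
		}
	}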
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-134415 -n pause-134415
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-134415 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-134415 logs -n 25: (1.258358156s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p kubernetes-upgrade-695907       | kubernetes-upgrade-695907 | jenkins | v1.33.1 | 29 Jul 24 18:39 UTC |                     |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0       |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-801126        | force-systemd-env-801126  | jenkins | v1.33.1 | 29 Jul 24 18:40 UTC | 29 Jul 24 18:40 UTC |
	| start   | -p stopped-upgrade-931829          | minikube                  | jenkins | v1.26.0 | 29 Jul 24 18:40 UTC | 29 Jul 24 18:41 UTC |
	|         | --memory=2200 --vm-driver=kvm2     |                           |         |         |                     |                     |
	|         |  --container-runtime=crio          |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-790573             | NoKubernetes-790573       | jenkins | v1.33.1 | 29 Jul 24 18:40 UTC | 29 Jul 24 18:41 UTC |
	|         | --no-kubernetes --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p offline-crio-778169             | offline-crio-778169       | jenkins | v1.33.1 | 29 Jul 24 18:40 UTC | 29 Jul 24 18:40 UTC |
	| start   | -p running-upgrade-459882          | minikube                  | jenkins | v1.26.0 | 29 Jul 24 18:40 UTC | 29 Jul 24 18:42 UTC |
	|         | --memory=2200 --vm-driver=kvm2     |                           |         |         |                     |                     |
	|         |  --container-runtime=crio          |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-790573             | NoKubernetes-790573       | jenkins | v1.33.1 | 29 Jul 24 18:41 UTC | 29 Jul 24 18:41 UTC |
	| start   | -p NoKubernetes-790573             | NoKubernetes-790573       | jenkins | v1.33.1 | 29 Jul 24 18:41 UTC | 29 Jul 24 18:42 UTC |
	|         | --no-kubernetes --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-931829 stop        | minikube                  | jenkins | v1.26.0 | 29 Jul 24 18:41 UTC | 29 Jul 24 18:41 UTC |
	| start   | -p stopped-upgrade-931829          | stopped-upgrade-931829    | jenkins | v1.33.1 | 29 Jul 24 18:41 UTC | 29 Jul 24 18:42 UTC |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p running-upgrade-459882          | running-upgrade-459882    | jenkins | v1.33.1 | 29 Jul 24 18:42 UTC | 29 Jul 24 18:43 UTC |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-790573 sudo        | NoKubernetes-790573       | jenkins | v1.33.1 | 29 Jul 24 18:42 UTC |                     |
	|         | systemctl is-active --quiet        |                           |         |         |                     |                     |
	|         | service kubelet                    |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-790573             | NoKubernetes-790573       | jenkins | v1.33.1 | 29 Jul 24 18:42 UTC | 29 Jul 24 18:42 UTC |
	| start   | -p NoKubernetes-790573             | NoKubernetes-790573       | jenkins | v1.33.1 | 29 Jul 24 18:42 UTC | 29 Jul 24 18:42 UTC |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-790573 sudo        | NoKubernetes-790573       | jenkins | v1.33.1 | 29 Jul 24 18:42 UTC |                     |
	|         | systemctl is-active --quiet        |                           |         |         |                     |                     |
	|         | service kubelet                    |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-790573             | NoKubernetes-790573       | jenkins | v1.33.1 | 29 Jul 24 18:42 UTC | 29 Jul 24 18:42 UTC |
	| start   | -p pause-134415 --memory=2048      | pause-134415              | jenkins | v1.33.1 | 29 Jul 24 18:42 UTC | 29 Jul 24 18:43 UTC |
	|         | --install-addons=false             |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2           |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-931829          | stopped-upgrade-931829    | jenkins | v1.33.1 | 29 Jul 24 18:42 UTC | 29 Jul 24 18:42 UTC |
	| start   | -p cert-expiration-974855          | cert-expiration-974855    | jenkins | v1.33.1 | 29 Jul 24 18:42 UTC | 29 Jul 24 18:43 UTC |
	|         | --memory=2048                      |                           |         |         |                     |                     |
	|         | --cert-expiration=3m               |                           |         |         |                     |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-459882          | running-upgrade-459882    | jenkins | v1.33.1 | 29 Jul 24 18:43 UTC | 29 Jul 24 18:43 UTC |
	| start   | -p force-systemd-flag-729652       | force-systemd-flag-729652 | jenkins | v1.33.1 | 29 Jul 24 18:43 UTC | 29 Jul 24 18:44 UTC |
	|         | --memory=2048 --force-systemd      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p pause-134415                    | pause-134415              | jenkins | v1.33.1 | 29 Jul 24 18:43 UTC | 29 Jul 24 18:44 UTC |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-729652 ssh cat  | force-systemd-flag-729652 | jenkins | v1.33.1 | 29 Jul 24 18:44 UTC | 29 Jul 24 18:44 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-729652       | force-systemd-flag-729652 | jenkins | v1.33.1 | 29 Jul 24 18:44 UTC | 29 Jul 24 18:44 UTC |
	| start   | -p cert-options-899685             | cert-options-899685       | jenkins | v1.33.1 | 29 Jul 24 18:44 UTC |                     |
	|         | --memory=2048                      |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1          |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15      |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost        |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com   |                           |         |         |                     |                     |
	|         | --apiserver-port=8555              |                           |         |         |                     |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 18:44:23
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 18:44:23.809900  136235 out.go:291] Setting OutFile to fd 1 ...
	I0729 18:44:23.810091  136235 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:44:23.810094  136235 out.go:304] Setting ErrFile to fd 2...
	I0729 18:44:23.810097  136235 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:44:23.810282  136235 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19339-88081/.minikube/bin
	I0729 18:44:23.810839  136235 out.go:298] Setting JSON to false
	I0729 18:44:23.811725  136235 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":12384,"bootTime":1722266280,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 18:44:23.811772  136235 start.go:139] virtualization: kvm guest
	I0729 18:44:23.814288  136235 out.go:177] * [cert-options-899685] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 18:44:23.815732  136235 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 18:44:23.815765  136235 notify.go:220] Checking for updates...
	I0729 18:44:23.818138  136235 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 18:44:23.819299  136235 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19339-88081/kubeconfig
	I0729 18:44:23.820562  136235 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19339-88081/.minikube
	I0729 18:44:23.821738  136235 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 18:44:23.823103  136235 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 18:44:23.824769  136235 config.go:182] Loaded profile config "cert-expiration-974855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:44:23.824887  136235 config.go:182] Loaded profile config "kubernetes-upgrade-695907": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 18:44:23.824994  136235 config.go:182] Loaded profile config "pause-134415": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:44:23.825061  136235 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 18:44:23.859727  136235 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 18:44:23.860822  136235 start.go:297] selected driver: kvm2
	I0729 18:44:23.860829  136235 start.go:901] validating driver "kvm2" against <nil>
	I0729 18:44:23.860837  136235 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 18:44:23.861787  136235 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 18:44:23.861875  136235 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19339-88081/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 18:44:23.876295  136235 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 18:44:23.876326  136235 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 18:44:23.876613  136235 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 18:44:23.876634  136235 cni.go:84] Creating CNI manager for ""
	I0729 18:44:23.876648  136235 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:44:23.876657  136235 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 18:44:23.876711  136235 start.go:340] cluster config:
	{Name:cert-options-899685 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:cert-options-899685 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:44:23.876830  136235 iso.go:125] acquiring lock: {Name:mkff602c753bfa3e0d79d57ee8dc490f2e8d0298 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 18:44:23.878496  136235 out.go:177] * Starting "cert-options-899685" primary control-plane node in "cert-options-899685" cluster
	I0729 18:44:23.879634  136235 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 18:44:23.879660  136235 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 18:44:23.879665  136235 cache.go:56] Caching tarball of preloaded images
	I0729 18:44:23.879743  136235 preload.go:172] Found /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 18:44:23.879749  136235 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 18:44:23.879842  136235 profile.go:143] Saving config to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/cert-options-899685/config.json ...
	I0729 18:44:23.879856  136235 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/cert-options-899685/config.json: {Name:mk14233c801f43586c4a6fbf2de2f0d9abea3770 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:44:23.879988  136235 start.go:360] acquireMachinesLock for cert-options-899685: {Name:mkb4fc91615189a18c2505c715f6575ee0da2912 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 18:44:23.880018  136235 start.go:364] duration metric: took 19.061µs to acquireMachinesLock for "cert-options-899685"
	I0729 18:44:23.880035  136235 start.go:93] Provisioning new machine with config: &{Name:cert-options-899685 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:cert-options-899685 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8555 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 18:44:23.880088  136235 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 18:44:22.186968  135833 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 18:44:22.198438  135833 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 18:44:22.218240  135833 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 18:44:22.229293  135833 system_pods.go:59] 6 kube-system pods found
	I0729 18:44:22.229331  135833 system_pods.go:61] "coredns-7db6d8ff4d-g6tp9" [769ed268-b082-415e-b416-a14e68a0084f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 18:44:22.229342  135833 system_pods.go:61] "etcd-pause-134415" [281750af-8362-4638-b0db-01f5f69bcd38] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 18:44:22.229351  135833 system_pods.go:61] "kube-apiserver-pause-134415" [8a19558c-02cd-472d-a10b-255ad2a3dc66] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 18:44:22.229361  135833 system_pods.go:61] "kube-controller-manager-pause-134415" [b40ab25f-2497-4fda-8e23-5e52e1150e55] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 18:44:22.229372  135833 system_pods.go:61] "kube-proxy-sm2kx" [6fa9e9f4-9f39-41f5-9d79-4f394201011f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 18:44:22.229387  135833 system_pods.go:61] "kube-scheduler-pause-134415" [4e7d5033-71d1-4921-af2f-37f868cc0896] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 18:44:22.229402  135833 system_pods.go:74] duration metric: took 11.138932ms to wait for pod list to return data ...
	I0729 18:44:22.229414  135833 node_conditions.go:102] verifying NodePressure condition ...
	I0729 18:44:22.233223  135833 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 18:44:22.233252  135833 node_conditions.go:123] node cpu capacity is 2
	I0729 18:44:22.233265  135833 node_conditions.go:105] duration metric: took 3.841523ms to run NodePressure ...
	I0729 18:44:22.233289  135833 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:44:22.528972  135833 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 18:44:22.533866  135833 kubeadm.go:739] kubelet initialised
	I0729 18:44:22.533886  135833 kubeadm.go:740] duration metric: took 4.888474ms waiting for restarted kubelet to initialise ...
	I0729 18:44:22.533896  135833 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:44:22.538147  135833 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-g6tp9" in "kube-system" namespace to be "Ready" ...
	I0729 18:44:23.881544  136235 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 18:44:23.881674  136235 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:44:23.881704  136235 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:44:23.895411  136235 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41169
	I0729 18:44:23.895946  136235 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:44:23.896489  136235 main.go:141] libmachine: Using API Version  1
	I0729 18:44:23.896524  136235 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:44:23.896919  136235 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:44:23.897138  136235 main.go:141] libmachine: (cert-options-899685) Calling .GetMachineName
	I0729 18:44:23.897283  136235 main.go:141] libmachine: (cert-options-899685) Calling .DriverName
	I0729 18:44:23.897423  136235 start.go:159] libmachine.API.Create for "cert-options-899685" (driver="kvm2")
	I0729 18:44:23.897444  136235 client.go:168] LocalClient.Create starting
	I0729 18:44:23.897471  136235 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem
	I0729 18:44:23.897498  136235 main.go:141] libmachine: Decoding PEM data...
	I0729 18:44:23.897511  136235 main.go:141] libmachine: Parsing certificate...
	I0729 18:44:23.897567  136235 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem
	I0729 18:44:23.897580  136235 main.go:141] libmachine: Decoding PEM data...
	I0729 18:44:23.897589  136235 main.go:141] libmachine: Parsing certificate...
	I0729 18:44:23.897599  136235 main.go:141] libmachine: Running pre-create checks...
	I0729 18:44:23.897605  136235 main.go:141] libmachine: (cert-options-899685) Calling .PreCreateCheck
	I0729 18:44:23.897946  136235 main.go:141] libmachine: (cert-options-899685) Calling .GetConfigRaw
	I0729 18:44:23.898298  136235 main.go:141] libmachine: Creating machine...
	I0729 18:44:23.898304  136235 main.go:141] libmachine: (cert-options-899685) Calling .Create
	I0729 18:44:23.898426  136235 main.go:141] libmachine: (cert-options-899685) Creating KVM machine...
	I0729 18:44:23.899708  136235 main.go:141] libmachine: (cert-options-899685) DBG | found existing default KVM network
	I0729 18:44:23.902264  136235 main.go:141] libmachine: (cert-options-899685) DBG | I0729 18:44:23.902089  136258 network.go:209] skipping subnet 192.168.39.0/24 that is reserved: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 18:44:23.903167  136235 main.go:141] libmachine: (cert-options-899685) DBG | I0729 18:44:23.903098  136258 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:7b:af:8a} reservation:<nil>}
	I0729 18:44:23.903958  136235 main.go:141] libmachine: (cert-options-899685) DBG | I0729 18:44:23.903871  136258 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:1b:5e:f1} reservation:<nil>}
	I0729 18:44:23.904634  136235 main.go:141] libmachine: (cert-options-899685) DBG | I0729 18:44:23.904575  136258 network.go:211] skipping subnet 192.168.72.0/24 that is taken: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.72.1 IfaceMTU:1500 IfaceMAC:52:54:00:78:c8:3d} reservation:<nil>}
	I0729 18:44:23.905685  136235 main.go:141] libmachine: (cert-options-899685) DBG | I0729 18:44:23.905609  136258 network.go:206] using free private subnet 192.168.83.0/24: &{IP:192.168.83.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.83.0/24 Gateway:192.168.83.1 ClientMin:192.168.83.2 ClientMax:192.168.83.254 Broadcast:192.168.83.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00041e750}
	I0729 18:44:23.905716  136235 main.go:141] libmachine: (cert-options-899685) DBG | created network xml: 
	I0729 18:44:23.905733  136235 main.go:141] libmachine: (cert-options-899685) DBG | <network>
	I0729 18:44:23.905741  136235 main.go:141] libmachine: (cert-options-899685) DBG |   <name>mk-cert-options-899685</name>
	I0729 18:44:23.905748  136235 main.go:141] libmachine: (cert-options-899685) DBG |   <dns enable='no'/>
	I0729 18:44:23.905760  136235 main.go:141] libmachine: (cert-options-899685) DBG |   
	I0729 18:44:23.905766  136235 main.go:141] libmachine: (cert-options-899685) DBG |   <ip address='192.168.83.1' netmask='255.255.255.0'>
	I0729 18:44:23.905773  136235 main.go:141] libmachine: (cert-options-899685) DBG |     <dhcp>
	I0729 18:44:23.905780  136235 main.go:141] libmachine: (cert-options-899685) DBG |       <range start='192.168.83.2' end='192.168.83.253'/>
	I0729 18:44:23.905786  136235 main.go:141] libmachine: (cert-options-899685) DBG |     </dhcp>
	I0729 18:44:23.905796  136235 main.go:141] libmachine: (cert-options-899685) DBG |   </ip>
	I0729 18:44:23.905811  136235 main.go:141] libmachine: (cert-options-899685) DBG |   
	I0729 18:44:23.905816  136235 main.go:141] libmachine: (cert-options-899685) DBG | </network>
	I0729 18:44:23.905827  136235 main.go:141] libmachine: (cert-options-899685) DBG | 
	I0729 18:44:23.910636  136235 main.go:141] libmachine: (cert-options-899685) DBG | trying to create private KVM network mk-cert-options-899685 192.168.83.0/24...
	I0729 18:44:23.980062  136235 main.go:141] libmachine: (cert-options-899685) DBG | private KVM network mk-cert-options-899685 192.168.83.0/24 created
	I0729 18:44:23.980175  136235 main.go:141] libmachine: (cert-options-899685) Setting up store path in /home/jenkins/minikube-integration/19339-88081/.minikube/machines/cert-options-899685 ...
	I0729 18:44:23.980205  136235 main.go:141] libmachine: (cert-options-899685) Building disk image from file:///home/jenkins/minikube-integration/19339-88081/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0729 18:44:23.980216  136235 main.go:141] libmachine: (cert-options-899685) DBG | I0729 18:44:23.980169  136258 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19339-88081/.minikube
	I0729 18:44:23.980334  136235 main.go:141] libmachine: (cert-options-899685) Downloading /home/jenkins/minikube-integration/19339-88081/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19339-88081/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0729 18:44:24.229607  136235 main.go:141] libmachine: (cert-options-899685) DBG | I0729 18:44:24.229431  136258 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/cert-options-899685/id_rsa...
	I0729 18:44:24.417430  136235 main.go:141] libmachine: (cert-options-899685) DBG | I0729 18:44:24.417271  136258 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/cert-options-899685/cert-options-899685.rawdisk...
	I0729 18:44:24.417456  136235 main.go:141] libmachine: (cert-options-899685) DBG | Writing magic tar header
	I0729 18:44:24.417473  136235 main.go:141] libmachine: (cert-options-899685) DBG | Writing SSH key tar header
	I0729 18:44:24.417484  136235 main.go:141] libmachine: (cert-options-899685) DBG | I0729 18:44:24.417420  136258 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19339-88081/.minikube/machines/cert-options-899685 ...
	I0729 18:44:24.417628  136235 main.go:141] libmachine: (cert-options-899685) Setting executable bit set on /home/jenkins/minikube-integration/19339-88081/.minikube/machines/cert-options-899685 (perms=drwx------)
	I0729 18:44:24.417657  136235 main.go:141] libmachine: (cert-options-899685) Setting executable bit set on /home/jenkins/minikube-integration/19339-88081/.minikube/machines (perms=drwxr-xr-x)
	I0729 18:44:24.417668  136235 main.go:141] libmachine: (cert-options-899685) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/cert-options-899685
	I0729 18:44:24.417678  136235 main.go:141] libmachine: (cert-options-899685) Setting executable bit set on /home/jenkins/minikube-integration/19339-88081/.minikube (perms=drwxr-xr-x)
	I0729 18:44:24.417691  136235 main.go:141] libmachine: (cert-options-899685) Setting executable bit set on /home/jenkins/minikube-integration/19339-88081 (perms=drwxrwxr-x)
	I0729 18:44:24.417700  136235 main.go:141] libmachine: (cert-options-899685) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 18:44:24.417709  136235 main.go:141] libmachine: (cert-options-899685) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 18:44:24.417715  136235 main.go:141] libmachine: (cert-options-899685) Creating domain...
	I0729 18:44:24.417730  136235 main.go:141] libmachine: (cert-options-899685) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19339-88081/.minikube/machines
	I0729 18:44:24.417739  136235 main.go:141] libmachine: (cert-options-899685) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19339-88081/.minikube
	I0729 18:44:24.417747  136235 main.go:141] libmachine: (cert-options-899685) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19339-88081
	I0729 18:44:24.417757  136235 main.go:141] libmachine: (cert-options-899685) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 18:44:24.417772  136235 main.go:141] libmachine: (cert-options-899685) DBG | Checking permissions on dir: /home/jenkins
	I0729 18:44:24.417782  136235 main.go:141] libmachine: (cert-options-899685) DBG | Checking permissions on dir: /home
	I0729 18:44:24.417788  136235 main.go:141] libmachine: (cert-options-899685) DBG | Skipping /home - not owner
	I0729 18:44:24.418916  136235 main.go:141] libmachine: (cert-options-899685) define libvirt domain using xml: 
	I0729 18:44:24.418924  136235 main.go:141] libmachine: (cert-options-899685) <domain type='kvm'>
	I0729 18:44:24.418929  136235 main.go:141] libmachine: (cert-options-899685)   <name>cert-options-899685</name>
	I0729 18:44:24.418942  136235 main.go:141] libmachine: (cert-options-899685)   <memory unit='MiB'>2048</memory>
	I0729 18:44:24.418947  136235 main.go:141] libmachine: (cert-options-899685)   <vcpu>2</vcpu>
	I0729 18:44:24.418953  136235 main.go:141] libmachine: (cert-options-899685)   <features>
	I0729 18:44:24.418957  136235 main.go:141] libmachine: (cert-options-899685)     <acpi/>
	I0729 18:44:24.418961  136235 main.go:141] libmachine: (cert-options-899685)     <apic/>
	I0729 18:44:24.418965  136235 main.go:141] libmachine: (cert-options-899685)     <pae/>
	I0729 18:44:24.418968  136235 main.go:141] libmachine: (cert-options-899685)     
	I0729 18:44:24.418972  136235 main.go:141] libmachine: (cert-options-899685)   </features>
	I0729 18:44:24.418976  136235 main.go:141] libmachine: (cert-options-899685)   <cpu mode='host-passthrough'>
	I0729 18:44:24.418980  136235 main.go:141] libmachine: (cert-options-899685)   
	I0729 18:44:24.418983  136235 main.go:141] libmachine: (cert-options-899685)   </cpu>
	I0729 18:44:24.418987  136235 main.go:141] libmachine: (cert-options-899685)   <os>
	I0729 18:44:24.418990  136235 main.go:141] libmachine: (cert-options-899685)     <type>hvm</type>
	I0729 18:44:24.419002  136235 main.go:141] libmachine: (cert-options-899685)     <boot dev='cdrom'/>
	I0729 18:44:24.419011  136235 main.go:141] libmachine: (cert-options-899685)     <boot dev='hd'/>
	I0729 18:44:24.419018  136235 main.go:141] libmachine: (cert-options-899685)     <bootmenu enable='no'/>
	I0729 18:44:24.419024  136235 main.go:141] libmachine: (cert-options-899685)   </os>
	I0729 18:44:24.419030  136235 main.go:141] libmachine: (cert-options-899685)   <devices>
	I0729 18:44:24.419038  136235 main.go:141] libmachine: (cert-options-899685)     <disk type='file' device='cdrom'>
	I0729 18:44:24.419050  136235 main.go:141] libmachine: (cert-options-899685)       <source file='/home/jenkins/minikube-integration/19339-88081/.minikube/machines/cert-options-899685/boot2docker.iso'/>
	I0729 18:44:24.419054  136235 main.go:141] libmachine: (cert-options-899685)       <target dev='hdc' bus='scsi'/>
	I0729 18:44:24.419059  136235 main.go:141] libmachine: (cert-options-899685)       <readonly/>
	I0729 18:44:24.419062  136235 main.go:141] libmachine: (cert-options-899685)     </disk>
	I0729 18:44:24.419067  136235 main.go:141] libmachine: (cert-options-899685)     <disk type='file' device='disk'>
	I0729 18:44:24.419072  136235 main.go:141] libmachine: (cert-options-899685)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 18:44:24.419079  136235 main.go:141] libmachine: (cert-options-899685)       <source file='/home/jenkins/minikube-integration/19339-88081/.minikube/machines/cert-options-899685/cert-options-899685.rawdisk'/>
	I0729 18:44:24.419082  136235 main.go:141] libmachine: (cert-options-899685)       <target dev='hda' bus='virtio'/>
	I0729 18:44:24.419086  136235 main.go:141] libmachine: (cert-options-899685)     </disk>
	I0729 18:44:24.419090  136235 main.go:141] libmachine: (cert-options-899685)     <interface type='network'>
	I0729 18:44:24.419095  136235 main.go:141] libmachine: (cert-options-899685)       <source network='mk-cert-options-899685'/>
	I0729 18:44:24.419102  136235 main.go:141] libmachine: (cert-options-899685)       <model type='virtio'/>
	I0729 18:44:24.419109  136235 main.go:141] libmachine: (cert-options-899685)     </interface>
	I0729 18:44:24.419117  136235 main.go:141] libmachine: (cert-options-899685)     <interface type='network'>
	I0729 18:44:24.419125  136235 main.go:141] libmachine: (cert-options-899685)       <source network='default'/>
	I0729 18:44:24.419131  136235 main.go:141] libmachine: (cert-options-899685)       <model type='virtio'/>
	I0729 18:44:24.419137  136235 main.go:141] libmachine: (cert-options-899685)     </interface>
	I0729 18:44:24.419140  136235 main.go:141] libmachine: (cert-options-899685)     <serial type='pty'>
	I0729 18:44:24.419145  136235 main.go:141] libmachine: (cert-options-899685)       <target port='0'/>
	I0729 18:44:24.419147  136235 main.go:141] libmachine: (cert-options-899685)     </serial>
	I0729 18:44:24.419154  136235 main.go:141] libmachine: (cert-options-899685)     <console type='pty'>
	I0729 18:44:24.419158  136235 main.go:141] libmachine: (cert-options-899685)       <target type='serial' port='0'/>
	I0729 18:44:24.419162  136235 main.go:141] libmachine: (cert-options-899685)     </console>
	I0729 18:44:24.419165  136235 main.go:141] libmachine: (cert-options-899685)     <rng model='virtio'>
	I0729 18:44:24.419180  136235 main.go:141] libmachine: (cert-options-899685)       <backend model='random'>/dev/random</backend>
	I0729 18:44:24.419186  136235 main.go:141] libmachine: (cert-options-899685)     </rng>
	I0729 18:44:24.419204  136235 main.go:141] libmachine: (cert-options-899685)     
	I0729 18:44:24.419215  136235 main.go:141] libmachine: (cert-options-899685)     
	I0729 18:44:24.419222  136235 main.go:141] libmachine: (cert-options-899685)   </devices>
	I0729 18:44:24.419226  136235 main.go:141] libmachine: (cert-options-899685) </domain>
	I0729 18:44:24.419233  136235 main.go:141] libmachine: (cert-options-899685) 
	I0729 18:44:24.423500  136235 main.go:141] libmachine: (cert-options-899685) DBG | domain cert-options-899685 has defined MAC address 52:54:00:4b:1f:a0 in network default
	I0729 18:44:24.424036  136235 main.go:141] libmachine: (cert-options-899685) Ensuring networks are active...
	I0729 18:44:24.424054  136235 main.go:141] libmachine: (cert-options-899685) DBG | domain cert-options-899685 has defined MAC address 52:54:00:d0:99:63 in network mk-cert-options-899685
	I0729 18:44:24.424658  136235 main.go:141] libmachine: (cert-options-899685) Ensuring network default is active
	I0729 18:44:24.424965  136235 main.go:141] libmachine: (cert-options-899685) Ensuring network mk-cert-options-899685 is active
	I0729 18:44:24.425541  136235 main.go:141] libmachine: (cert-options-899685) Getting domain xml...
	I0729 18:44:24.426323  136235 main.go:141] libmachine: (cert-options-899685) Creating domain...
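The domain definition above works the same way: the <domain> XML is handed to libvirt, then the defined-but-inactive domain is booted ("Creating domain..."). A minimal sketch under the same assumption (libvirt.org/go/libvirt bindings); the XML file path argument is a hypothetical stand-in for the document printed in the log.

package main

import (
	"log"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	if len(os.Args) < 2 {
		log.Fatal("usage: definedomain <domain-xml-file>")
	}
	// Path to a <domain> document like the one printed above.
	xml, err := os.ReadFile(os.Args[1])
	if err != nil {
		log.Fatalf("read domain xml: %v", err)
	}

	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatalf("connect to libvirt: %v", err)
	}
	defer conn.Close()

	// Define the persistent domain, then boot it.
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		log.Fatalf("define domain: %v", err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		log.Fatalf("start domain: %v", err)
	}
	log.Println("domain defined and started")
}
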
	I0729 18:44:24.752152  136235 main.go:141] libmachine: (cert-options-899685) Waiting to get IP...
	I0729 18:44:24.753112  136235 main.go:141] libmachine: (cert-options-899685) DBG | domain cert-options-899685 has defined MAC address 52:54:00:d0:99:63 in network mk-cert-options-899685
	I0729 18:44:24.753509  136235 main.go:141] libmachine: (cert-options-899685) DBG | unable to find current IP address of domain cert-options-899685 in network mk-cert-options-899685
	I0729 18:44:24.753544  136235 main.go:141] libmachine: (cert-options-899685) DBG | I0729 18:44:24.753484  136258 retry.go:31] will retry after 207.106358ms: waiting for machine to come up
	I0729 18:44:24.961906  136235 main.go:141] libmachine: (cert-options-899685) DBG | domain cert-options-899685 has defined MAC address 52:54:00:d0:99:63 in network mk-cert-options-899685
	I0729 18:44:24.962383  136235 main.go:141] libmachine: (cert-options-899685) DBG | unable to find current IP address of domain cert-options-899685 in network mk-cert-options-899685
	I0729 18:44:24.962402  136235 main.go:141] libmachine: (cert-options-899685) DBG | I0729 18:44:24.962339  136258 retry.go:31] will retry after 357.687747ms: waiting for machine to come up
	I0729 18:44:25.321844  136235 main.go:141] libmachine: (cert-options-899685) DBG | domain cert-options-899685 has defined MAC address 52:54:00:d0:99:63 in network mk-cert-options-899685
	I0729 18:44:25.322325  136235 main.go:141] libmachine: (cert-options-899685) DBG | unable to find current IP address of domain cert-options-899685 in network mk-cert-options-899685
	I0729 18:44:25.322345  136235 main.go:141] libmachine: (cert-options-899685) DBG | I0729 18:44:25.322276  136258 retry.go:31] will retry after 385.995333ms: waiting for machine to come up
	I0729 18:44:25.709980  136235 main.go:141] libmachine: (cert-options-899685) DBG | domain cert-options-899685 has defined MAC address 52:54:00:d0:99:63 in network mk-cert-options-899685
	I0729 18:44:25.710443  136235 main.go:141] libmachine: (cert-options-899685) DBG | unable to find current IP address of domain cert-options-899685 in network mk-cert-options-899685
	I0729 18:44:25.710473  136235 main.go:141] libmachine: (cert-options-899685) DBG | I0729 18:44:25.710390  136258 retry.go:31] will retry after 502.221316ms: waiting for machine to come up
	I0729 18:44:26.213947  136235 main.go:141] libmachine: (cert-options-899685) DBG | domain cert-options-899685 has defined MAC address 52:54:00:d0:99:63 in network mk-cert-options-899685
	I0729 18:44:26.214399  136235 main.go:141] libmachine: (cert-options-899685) DBG | unable to find current IP address of domain cert-options-899685 in network mk-cert-options-899685
	I0729 18:44:26.214421  136235 main.go:141] libmachine: (cert-options-899685) DBG | I0729 18:44:26.214345  136258 retry.go:31] will retry after 575.813211ms: waiting for machine to come up
	I0729 18:44:26.792300  136235 main.go:141] libmachine: (cert-options-899685) DBG | domain cert-options-899685 has defined MAC address 52:54:00:d0:99:63 in network mk-cert-options-899685
	I0729 18:44:26.792961  136235 main.go:141] libmachine: (cert-options-899685) DBG | unable to find current IP address of domain cert-options-899685 in network mk-cert-options-899685
	I0729 18:44:26.792984  136235 main.go:141] libmachine: (cert-options-899685) DBG | I0729 18:44:26.792908  136258 retry.go:31] will retry after 932.379992ms: waiting for machine to come up
	I0729 18:44:27.726468  136235 main.go:141] libmachine: (cert-options-899685) DBG | domain cert-options-899685 has defined MAC address 52:54:00:d0:99:63 in network mk-cert-options-899685
	I0729 18:44:27.726988  136235 main.go:141] libmachine: (cert-options-899685) DBG | unable to find current IP address of domain cert-options-899685 in network mk-cert-options-899685
	I0729 18:44:27.727047  136235 main.go:141] libmachine: (cert-options-899685) DBG | I0729 18:44:27.726953  136258 retry.go:31] will retry after 940.345986ms: waiting for machine to come up
	I0729 18:44:28.668378  136235 main.go:141] libmachine: (cert-options-899685) DBG | domain cert-options-899685 has defined MAC address 52:54:00:d0:99:63 in network mk-cert-options-899685
	I0729 18:44:28.668842  136235 main.go:141] libmachine: (cert-options-899685) DBG | unable to find current IP address of domain cert-options-899685 in network mk-cert-options-899685
	I0729 18:44:28.668871  136235 main.go:141] libmachine: (cert-options-899685) DBG | I0729 18:44:28.668799  136258 retry.go:31] will retry after 1.351702492s: waiting for machine to come up
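The "will retry after ...: waiting for machine to come up" lines above are a growing-backoff poll while the new VM acquires a DHCP lease. The sketch below shows the general pattern with the standard library only; lookupIP is a hypothetical placeholder, and the growth factor and jitter are illustrative, not the values minikube's retry.go actually uses.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("no DHCP lease yet")

// lookupIP stands in for the real check (querying libvirt's DHCP leases
// for the domain's MAC address); it is a hypothetical placeholder.
func lookupIP() (string, error) {
	return "", errNoLease
}

// waitForIP retries lookupIP with a growing, jittered delay, mirroring the
// retry lines in the log above.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay+jitter)
		time.Sleep(delay + jitter)
		delay = delay * 3 / 2 // grow the interval between attempts
	}
	return "", fmt.Errorf("timed out waiting for machine IP")
}

func main() {
	if _, err := waitForIP(2 * time.Second); err != nil {
		fmt.Println(err)
	}
}
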
	I0729 18:44:24.546638  135833 pod_ready.go:102] pod "coredns-7db6d8ff4d-g6tp9" in "kube-system" namespace has status "Ready":"False"
	I0729 18:44:26.549884  135833 pod_ready.go:102] pod "coredns-7db6d8ff4d-g6tp9" in "kube-system" namespace has status "Ready":"False"
	I0729 18:44:28.544934  135833 pod_ready.go:92] pod "coredns-7db6d8ff4d-g6tp9" in "kube-system" namespace has status "Ready":"True"
	I0729 18:44:28.544956  135833 pod_ready.go:81] duration metric: took 6.006787981s for pod "coredns-7db6d8ff4d-g6tp9" in "kube-system" namespace to be "Ready" ...
	I0729 18:44:28.544965  135833 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-134415" in "kube-system" namespace to be "Ready" ...
	I0729 18:44:30.021722  136235 main.go:141] libmachine: (cert-options-899685) DBG | domain cert-options-899685 has defined MAC address 52:54:00:d0:99:63 in network mk-cert-options-899685
	I0729 18:44:30.022220  136235 main.go:141] libmachine: (cert-options-899685) DBG | unable to find current IP address of domain cert-options-899685 in network mk-cert-options-899685
	I0729 18:44:30.022236  136235 main.go:141] libmachine: (cert-options-899685) DBG | I0729 18:44:30.022178  136258 retry.go:31] will retry after 1.561639036s: waiting for machine to come up
	I0729 18:44:31.585049  136235 main.go:141] libmachine: (cert-options-899685) DBG | domain cert-options-899685 has defined MAC address 52:54:00:d0:99:63 in network mk-cert-options-899685
	I0729 18:44:31.585484  136235 main.go:141] libmachine: (cert-options-899685) DBG | unable to find current IP address of domain cert-options-899685 in network mk-cert-options-899685
	I0729 18:44:31.585508  136235 main.go:141] libmachine: (cert-options-899685) DBG | I0729 18:44:31.585436  136258 retry.go:31] will retry after 1.864425608s: waiting for machine to come up
	I0729 18:44:33.452501  136235 main.go:141] libmachine: (cert-options-899685) DBG | domain cert-options-899685 has defined MAC address 52:54:00:d0:99:63 in network mk-cert-options-899685
	I0729 18:44:33.452971  136235 main.go:141] libmachine: (cert-options-899685) DBG | unable to find current IP address of domain cert-options-899685 in network mk-cert-options-899685
	I0729 18:44:33.453014  136235 main.go:141] libmachine: (cert-options-899685) DBG | I0729 18:44:33.452933  136258 retry.go:31] will retry after 2.828025352s: waiting for machine to come up
	I0729 18:44:30.551631  135833 pod_ready.go:102] pod "etcd-pause-134415" in "kube-system" namespace has status "Ready":"False"
	I0729 18:44:32.552435  135833 pod_ready.go:102] pod "etcd-pause-134415" in "kube-system" namespace has status "Ready":"False"
	I0729 18:44:34.554612  135833 pod_ready.go:102] pod "etcd-pause-134415" in "kube-system" namespace has status "Ready":"False"
	I0729 18:44:37.052265  135833 pod_ready.go:92] pod "etcd-pause-134415" in "kube-system" namespace has status "Ready":"True"
	I0729 18:44:37.052290  135833 pod_ready.go:81] duration metric: took 8.507318327s for pod "etcd-pause-134415" in "kube-system" namespace to be "Ready" ...
	I0729 18:44:37.052302  135833 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-134415" in "kube-system" namespace to be "Ready" ...
	I0729 18:44:37.056170  135833 pod_ready.go:92] pod "kube-apiserver-pause-134415" in "kube-system" namespace has status "Ready":"True"
	I0729 18:44:37.056191  135833 pod_ready.go:81] duration metric: took 3.882713ms for pod "kube-apiserver-pause-134415" in "kube-system" namespace to be "Ready" ...
	I0729 18:44:37.056204  135833 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-134415" in "kube-system" namespace to be "Ready" ...
	I0729 18:44:37.060036  135833 pod_ready.go:92] pod "kube-controller-manager-pause-134415" in "kube-system" namespace has status "Ready":"True"
	I0729 18:44:37.060054  135833 pod_ready.go:81] duration metric: took 3.84252ms for pod "kube-controller-manager-pause-134415" in "kube-system" namespace to be "Ready" ...
	I0729 18:44:37.060064  135833 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-sm2kx" in "kube-system" namespace to be "Ready" ...
	I0729 18:44:37.063842  135833 pod_ready.go:92] pod "kube-proxy-sm2kx" in "kube-system" namespace has status "Ready":"True"
	I0729 18:44:37.063860  135833 pod_ready.go:81] duration metric: took 3.788243ms for pod "kube-proxy-sm2kx" in "kube-system" namespace to be "Ready" ...
	I0729 18:44:37.063879  135833 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-134415" in "kube-system" namespace to be "Ready" ...
	I0729 18:44:37.068180  135833 pod_ready.go:92] pod "kube-scheduler-pause-134415" in "kube-system" namespace has status "Ready":"True"
	I0729 18:44:37.068198  135833 pod_ready.go:81] duration metric: took 4.311284ms for pod "kube-scheduler-pause-134415" in "kube-system" namespace to be "Ready" ...
	I0729 18:44:37.068206  135833 pod_ready.go:38] duration metric: took 14.534300931s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:44:37.068227  135833 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 18:44:37.080552  135833 ops.go:34] apiserver oom_adj: -16
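The oom_adj check above runs `cat /proc/$(pgrep kube-apiserver)/oom_adj` over SSH on the node. A rough local equivalent in plain Go, assuming the kube-apiserver process is visible in /proc on the machine where it runs:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// apiserverOOMAdj walks /proc looking for the kube-apiserver process and
// returns the contents of its oom_adj file, roughly equivalent to the
// shell one-liner in the log above.
func apiserverOOMAdj() (string, error) {
	entries, err := os.ReadDir("/proc")
	if err != nil {
		return "", err
	}
	for _, e := range entries {
		comm, err := os.ReadFile(filepath.Join("/proc", e.Name(), "comm"))
		if err != nil {
			continue // not a PID directory, or the process exited
		}
		if strings.TrimSpace(string(comm)) != "kube-apiserver" {
			continue
		}
		adj, err := os.ReadFile(filepath.Join("/proc", e.Name(), "oom_adj"))
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(adj)), nil
	}
	return "", fmt.Errorf("kube-apiserver process not found")
}

func main() {
	adj, err := apiserverOOMAdj()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver oom_adj:", adj)
}
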
	I0729 18:44:37.080567  135833 kubeadm.go:597] duration metric: took 22.009591792s to restartPrimaryControlPlane
	I0729 18:44:37.080575  135833 kubeadm.go:394] duration metric: took 22.290017715s to StartCluster
	I0729 18:44:37.080595  135833 settings.go:142] acquiring lock: {Name:mk5599d3fc2f664ed5eea99f33b4436f64ab8c83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:44:37.080681  135833 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19339-88081/kubeconfig
	I0729 18:44:37.081459  135833 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/kubeconfig: {Name:mk6702c0db404329102489655bdd2ff03ad6e919 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:44:37.081692  135833 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.77 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 18:44:37.081747  135833 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 18:44:37.081915  135833 config.go:182] Loaded profile config "pause-134415": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:44:37.083310  135833 out.go:177] * Enabled addons: 
	I0729 18:44:37.083313  135833 out.go:177] * Verifying Kubernetes components...
	I0729 18:44:36.282343  136235 main.go:141] libmachine: (cert-options-899685) DBG | domain cert-options-899685 has defined MAC address 52:54:00:d0:99:63 in network mk-cert-options-899685
	I0729 18:44:36.282695  136235 main.go:141] libmachine: (cert-options-899685) DBG | unable to find current IP address of domain cert-options-899685 in network mk-cert-options-899685
	I0729 18:44:36.282714  136235 main.go:141] libmachine: (cert-options-899685) DBG | I0729 18:44:36.282662  136258 retry.go:31] will retry after 2.607564531s: waiting for machine to come up
	I0729 18:44:37.084476  135833 addons.go:510] duration metric: took 2.728493ms for enable addons: enabled=[]
	I0729 18:44:37.084539  135833 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:44:37.243344  135833 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:44:37.258077  135833 node_ready.go:35] waiting up to 6m0s for node "pause-134415" to be "Ready" ...
	I0729 18:44:37.261634  135833 node_ready.go:49] node "pause-134415" has status "Ready":"True"
	I0729 18:44:37.261657  135833 node_ready.go:38] duration metric: took 3.551951ms for node "pause-134415" to be "Ready" ...
	I0729 18:44:37.261668  135833 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:44:37.451728  135833 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-g6tp9" in "kube-system" namespace to be "Ready" ...
	I0729 18:44:37.849946  135833 pod_ready.go:92] pod "coredns-7db6d8ff4d-g6tp9" in "kube-system" namespace has status "Ready":"True"
	I0729 18:44:37.849976  135833 pod_ready.go:81] duration metric: took 398.218174ms for pod "coredns-7db6d8ff4d-g6tp9" in "kube-system" namespace to be "Ready" ...
	I0729 18:44:37.849986  135833 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-134415" in "kube-system" namespace to be "Ready" ...
	I0729 18:44:38.250731  135833 pod_ready.go:92] pod "etcd-pause-134415" in "kube-system" namespace has status "Ready":"True"
	I0729 18:44:38.250758  135833 pod_ready.go:81] duration metric: took 400.764385ms for pod "etcd-pause-134415" in "kube-system" namespace to be "Ready" ...
	I0729 18:44:38.250773  135833 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-134415" in "kube-system" namespace to be "Ready" ...
	I0729 18:44:38.649367  135833 pod_ready.go:92] pod "kube-apiserver-pause-134415" in "kube-system" namespace has status "Ready":"True"
	I0729 18:44:38.649390  135833 pod_ready.go:81] duration metric: took 398.609477ms for pod "kube-apiserver-pause-134415" in "kube-system" namespace to be "Ready" ...
	I0729 18:44:38.649400  135833 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-134415" in "kube-system" namespace to be "Ready" ...
	I0729 18:44:39.048595  135833 pod_ready.go:92] pod "kube-controller-manager-pause-134415" in "kube-system" namespace has status "Ready":"True"
	I0729 18:44:39.048618  135833 pod_ready.go:81] duration metric: took 399.211804ms for pod "kube-controller-manager-pause-134415" in "kube-system" namespace to be "Ready" ...
	I0729 18:44:39.048630  135833 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sm2kx" in "kube-system" namespace to be "Ready" ...
	I0729 18:44:39.448147  135833 pod_ready.go:92] pod "kube-proxy-sm2kx" in "kube-system" namespace has status "Ready":"True"
	I0729 18:44:39.448169  135833 pod_ready.go:81] duration metric: took 399.533067ms for pod "kube-proxy-sm2kx" in "kube-system" namespace to be "Ready" ...
	I0729 18:44:39.448181  135833 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-134415" in "kube-system" namespace to be "Ready" ...
	I0729 18:44:39.848426  135833 pod_ready.go:92] pod "kube-scheduler-pause-134415" in "kube-system" namespace has status "Ready":"True"
	I0729 18:44:39.848449  135833 pod_ready.go:81] duration metric: took 400.262206ms for pod "kube-scheduler-pause-134415" in "kube-system" namespace to be "Ready" ...
	I0729 18:44:39.848457  135833 pod_ready.go:38] duration metric: took 2.586777927s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
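The pod_ready waits above poll each pod until its Ready condition reports True. A small client-go sketch of that check, using the kubeconfig path shown earlier in the log; the pod name and poll interval here are illustrative, not the test's own values.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True, the same
// test the pod_ready waits above are making.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19339-88081/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()

	for {
		pod, err := clientset.CoreV1().Pods("kube-system").Get(ctx, "etcd-pause-134415", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for pod to be Ready")
			return
		case <-time.After(2 * time.Second):
		}
	}
}
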
	I0729 18:44:39.848475  135833 api_server.go:52] waiting for apiserver process to appear ...
	I0729 18:44:39.848526  135833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:44:39.863102  135833 api_server.go:72] duration metric: took 2.781378113s to wait for apiserver process to appear ...
	I0729 18:44:39.863126  135833 api_server.go:88] waiting for apiserver healthz status ...
	I0729 18:44:39.863142  135833 api_server.go:253] Checking apiserver healthz at https://192.168.61.77:8443/healthz ...
	I0729 18:44:39.868895  135833 api_server.go:279] https://192.168.61.77:8443/healthz returned 200:
	ok
	I0729 18:44:39.870400  135833 api_server.go:141] control plane version: v1.30.3
	I0729 18:44:39.870420  135833 api_server.go:131] duration metric: took 7.288369ms to wait for apiserver health ...
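The healthz probe above is an HTTPS GET against the control-plane endpoint that expects a 200 response with an "ok" body. A minimal standard-library sketch; it skips TLS verification for brevity, whereas the real check trusts the cluster CA.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz probes the apiserver's /healthz endpoint, the same URL the
// log checks above.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
	return nil
}

func main() {
	// The address below is the control-plane endpoint from the log.
	if err := checkHealthz("https://192.168.61.77:8443/healthz"); err != nil {
		fmt.Println(err)
	}
}
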
	I0729 18:44:39.870428  135833 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 18:44:40.050674  135833 system_pods.go:59] 6 kube-system pods found
	I0729 18:44:40.050703  135833 system_pods.go:61] "coredns-7db6d8ff4d-g6tp9" [769ed268-b082-415e-b416-a14e68a0084f] Running
	I0729 18:44:40.050708  135833 system_pods.go:61] "etcd-pause-134415" [281750af-8362-4638-b0db-01f5f69bcd38] Running
	I0729 18:44:40.050711  135833 system_pods.go:61] "kube-apiserver-pause-134415" [8a19558c-02cd-472d-a10b-255ad2a3dc66] Running
	I0729 18:44:40.050715  135833 system_pods.go:61] "kube-controller-manager-pause-134415" [b40ab25f-2497-4fda-8e23-5e52e1150e55] Running
	I0729 18:44:40.050718  135833 system_pods.go:61] "kube-proxy-sm2kx" [6fa9e9f4-9f39-41f5-9d79-4f394201011f] Running
	I0729 18:44:40.050721  135833 system_pods.go:61] "kube-scheduler-pause-134415" [4e7d5033-71d1-4921-af2f-37f868cc0896] Running
	I0729 18:44:40.050728  135833 system_pods.go:74] duration metric: took 180.293904ms to wait for pod list to return data ...
	I0729 18:44:40.050737  135833 default_sa.go:34] waiting for default service account to be created ...
	I0729 18:44:40.249885  135833 default_sa.go:45] found service account: "default"
	I0729 18:44:40.249913  135833 default_sa.go:55] duration metric: took 199.168372ms for default service account to be created ...
	I0729 18:44:40.249924  135833 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 18:44:40.451395  135833 system_pods.go:86] 6 kube-system pods found
	I0729 18:44:40.451422  135833 system_pods.go:89] "coredns-7db6d8ff4d-g6tp9" [769ed268-b082-415e-b416-a14e68a0084f] Running
	I0729 18:44:40.451427  135833 system_pods.go:89] "etcd-pause-134415" [281750af-8362-4638-b0db-01f5f69bcd38] Running
	I0729 18:44:40.451431  135833 system_pods.go:89] "kube-apiserver-pause-134415" [8a19558c-02cd-472d-a10b-255ad2a3dc66] Running
	I0729 18:44:40.451435  135833 system_pods.go:89] "kube-controller-manager-pause-134415" [b40ab25f-2497-4fda-8e23-5e52e1150e55] Running
	I0729 18:44:40.451441  135833 system_pods.go:89] "kube-proxy-sm2kx" [6fa9e9f4-9f39-41f5-9d79-4f394201011f] Running
	I0729 18:44:40.451445  135833 system_pods.go:89] "kube-scheduler-pause-134415" [4e7d5033-71d1-4921-af2f-37f868cc0896] Running
	I0729 18:44:40.451451  135833 system_pods.go:126] duration metric: took 201.522228ms to wait for k8s-apps to be running ...
	I0729 18:44:40.451461  135833 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 18:44:40.451516  135833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:44:40.466242  135833 system_svc.go:56] duration metric: took 14.771743ms WaitForService to wait for kubelet
	I0729 18:44:40.466273  135833 kubeadm.go:582] duration metric: took 3.384552318s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 18:44:40.466296  135833 node_conditions.go:102] verifying NodePressure condition ...
	I0729 18:44:40.648575  135833 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 18:44:40.648604  135833 node_conditions.go:123] node cpu capacity is 2
	I0729 18:44:40.648618  135833 node_conditions.go:105] duration metric: took 182.31716ms to run NodePressure ...
	I0729 18:44:40.648630  135833 start.go:241] waiting for startup goroutines ...
	I0729 18:44:40.648636  135833 start.go:246] waiting for cluster config update ...
	I0729 18:44:40.648643  135833 start.go:255] writing updated cluster config ...
	I0729 18:44:40.648921  135833 ssh_runner.go:195] Run: rm -f paused
	I0729 18:44:40.696696  135833 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 18:44:40.698642  135833 out.go:177] * Done! kubectl is now configured to use "pause-134415" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 29 18:44:43 pause-134415 crio[2761]: time="2024-07-29 18:44:43.219771809Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4a00e38c-659f-4e05-8c50-f882b22109a3 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:44:43 pause-134415 crio[2761]: time="2024-07-29 18:44:43.221165443Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4567a972-259d-4320-86b0-43aa72ce053b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:44:43 pause-134415 crio[2761]: time="2024-07-29 18:44:43.221780968Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722278683221760849,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4567a972-259d-4320-86b0-43aa72ce053b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:44:43 pause-134415 crio[2761]: time="2024-07-29 18:44:43.222205327Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=048210c8-f7e5-4b4c-8610-137194a717cd name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:44:43 pause-134415 crio[2761]: time="2024-07-29 18:44:43.222281160Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=048210c8-f7e5-4b4c-8610-137194a717cd name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:44:43 pause-134415 crio[2761]: time="2024-07-29 18:44:43.222564927Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b66ff6521b908bcec86efa40985e9c08b511e7d95699d4ed139720453d5061b0,PodSandboxId:ef6ff6ce1302b3edb507ec1f7f6c85daafef2b1d8f75b716be2e7155c05dbd9a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722278661394605298,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g6tp9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 769ed268-b082-415e-b416-a14e68a0084f,},Annotations:map[string]string{io.kubernetes.container.hash: 4a31b561,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e8735cf29fbe7ee72a648e3e0732fa8e64900af4331b75aeadd0f649f1fd34d,PodSandboxId:d6726e028ad3bf754d8c2d5c5e881e126f791f83b44769b4759d1350fcac27f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722278661380885687,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sm2kx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 6fa9e9f4-9f39-41f5-9d79-4f394201011f,},Annotations:map[string]string{io.kubernetes.container.hash: 2fc3ab31,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f442242c7d7c603e6f2b1211eefa28e15b68aff15d73335b267e55e0adde5a3,PodSandboxId:92f73b06af82533868360d6850309d3bb73346f417cb3553fd49d40b2668e985,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722278657544061766,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a722ce892a235bcd005647b7378328,},Annot
ations:map[string]string{io.kubernetes.container.hash: 6e88c83,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bbd6a3a4556fbbe7dabed9875ca38f31a24db9897cfc76e781f41a6446420a2,PodSandboxId:92a0282e7c557cf3c0fcce3aa2abf883e895bb8c99697d6933284596e015e448,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722278657566043011,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fd238acbeb786e5ddde053dadb75eeb,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 7585830f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f087baee3eebf09602b5efdfddd38c12456421d72eaba77ea9ff2ec9ceadc0bc,PodSandboxId:67f46f43207ec552846f5023fccdc6a57a5ca9d2a69c7506f2e13dfec4552a16,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722278657570408829,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60d4a8cc0f6d4b01cb667cee784c02b9,},Annotations:map[string]string{io.kubernete
s.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e61889f641f5a09c6a334e7d94e7e05adbe55d8bbd3c04cb1dfad91bf26b67a,PodSandboxId:61be98a20264283a28f002672dbfd5add6b063a7004449b35be68226c103c25a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722278657552913010,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd8293080c772c37780ce473c40b2740,},Annotations:map[string]string{io.
kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68ff3286f312a7e6cc7f7eda1f926956aa3e408ea8ac569394382c5e2f65caff,PodSandboxId:59230aff02baeb35dea12224df4120ac6e8475d895b4528515ff4d0ffe2c949a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722278652675737170,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd8293080c772c37780ce473c40b2740,},Annotations:map[string]st
ring{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4cea604b117c735de1f3732a68407d4d9edbfe5bc517161f76a1e8b4f540edd,PodSandboxId:9a05fc85efe4c259433f1431b02dc8e2bc4c2065271649291c91d3ce6d87767f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722278652666715695,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60d4a8cc0f6d4b01cb667cee784c02b9,},Annotations:map[string]string{io.kubernetes.
container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d389fc1bf4eeb57dd955d429698dc986118bfab83fb8f94539d6288ec897d308,PodSandboxId:7764942ab46adf2d3699dd1759ad36ba65545efad294b30b60d454a0222ea17f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722278652604629075,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fd238acbeb786e5ddde053dadb75eeb,},Annotations:map[string]string{io.kubernetes.container.hash: 758
5830f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7844ba6af4b037548772d7ba9aab3b599cc5b42ffa5763882a3370c48658933f,PodSandboxId:a51f9d4338cfb90cbdfcd2dbc9a3e4f9dbf733b60d7a2b4ef8e6ff461387f1b1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722278652562880547,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a722ce892a235bcd005647b7378328,},Annotations:map[string]string{io.kubernetes.container.hash: 6e88c83,io.kubernetes.container.restartCount: 1,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dab08bb9b264b26d33674a128e2c7a091b0aa71b1a13ea91a0075b95ffb3c7a,PodSandboxId:e5f0aabc65bd5caa25962dab8f4ad095d8b45060a17323b5c3e26da47c54b824,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722278652168250755,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sm2kx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fa9e9f4-9f39-41f5-9d79-4f394201011f,},Annotations:map[string]string{io.kubernetes.container.hash: 2fc3ab31,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e50cff59ddb5d30844f6780ac1ffa4e2b09054f5cd6846bf7aedce6c54433e88,PodSandboxId:ab0446dbd9dbbf2a1944c4f41f4b574ecc97916dd2c6cd62c809094edf404a78,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722278626319402914,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g6tp9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 769ed268-b082-415e-b416-a14e68a0084f,},Annotations:map[string]string{io.kubernetes.container.hash: 4a31b561,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=048210c8-f7e5-4b4c-8610-137194a717cd name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:44:43 pause-134415 crio[2761]: time="2024-07-29 18:44:43.269541909Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0e7875b5-9f78-44c1-90b4-2e4467e27135 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:44:43 pause-134415 crio[2761]: time="2024-07-29 18:44:43.269615762Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0e7875b5-9f78-44c1-90b4-2e4467e27135 name=/runtime.v1.RuntimeService/Version
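The CRI-O entries in this section record Version and ListContainers RPCs arriving over the CRI socket. A sketch of issuing the same two calls with the k8s.io/cri-api client over gRPC; the socket path is CRI-O's default and is an assumption here, and the output formatting is illustrative.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// CRI-O's default CRI socket; assumed to be the crio process
	// answering the RPCs in the log above.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial CRI socket: %v", err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Same two RPCs the log shows: Version, then an unfiltered ListContainers.
	ver, err := client.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		log.Fatalf("Version: %v", err)
	}
	fmt.Printf("%s %s (CRI %s)\n", ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)

	list, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatalf("ListContainers: %v", err)
	}
	for _, c := range list.Containers {
		fmt.Printf("%s\t%s\t%s\n", c.Id, c.Metadata.Name, c.State)
	}
}
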
	Jul 29 18:44:43 pause-134415 crio[2761]: time="2024-07-29 18:44:43.270770110Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=56ab0def-3602-492a-82d3-6e805b02a2a0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:44:43 pause-134415 crio[2761]: time="2024-07-29 18:44:43.271113686Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722278683271093817,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=56ab0def-3602-492a-82d3-6e805b02a2a0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:44:43 pause-134415 crio[2761]: time="2024-07-29 18:44:43.271734056Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b72f1658-814d-44e0-893f-45cdf2ef9688 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:44:43 pause-134415 crio[2761]: time="2024-07-29 18:44:43.271784302Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b72f1658-814d-44e0-893f-45cdf2ef9688 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:44:43 pause-134415 crio[2761]: time="2024-07-29 18:44:43.272205422Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b66ff6521b908bcec86efa40985e9c08b511e7d95699d4ed139720453d5061b0,PodSandboxId:ef6ff6ce1302b3edb507ec1f7f6c85daafef2b1d8f75b716be2e7155c05dbd9a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722278661394605298,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g6tp9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 769ed268-b082-415e-b416-a14e68a0084f,},Annotations:map[string]string{io.kubernetes.container.hash: 4a31b561,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e8735cf29fbe7ee72a648e3e0732fa8e64900af4331b75aeadd0f649f1fd34d,PodSandboxId:d6726e028ad3bf754d8c2d5c5e881e126f791f83b44769b4759d1350fcac27f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722278661380885687,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sm2kx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 6fa9e9f4-9f39-41f5-9d79-4f394201011f,},Annotations:map[string]string{io.kubernetes.container.hash: 2fc3ab31,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f442242c7d7c603e6f2b1211eefa28e15b68aff15d73335b267e55e0adde5a3,PodSandboxId:92f73b06af82533868360d6850309d3bb73346f417cb3553fd49d40b2668e985,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722278657544061766,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a722ce892a235bcd005647b7378328,},Annot
ations:map[string]string{io.kubernetes.container.hash: 6e88c83,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bbd6a3a4556fbbe7dabed9875ca38f31a24db9897cfc76e781f41a6446420a2,PodSandboxId:92a0282e7c557cf3c0fcce3aa2abf883e895bb8c99697d6933284596e015e448,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722278657566043011,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fd238acbeb786e5ddde053dadb75eeb,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 7585830f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f087baee3eebf09602b5efdfddd38c12456421d72eaba77ea9ff2ec9ceadc0bc,PodSandboxId:67f46f43207ec552846f5023fccdc6a57a5ca9d2a69c7506f2e13dfec4552a16,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722278657570408829,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60d4a8cc0f6d4b01cb667cee784c02b9,},Annotations:map[string]string{io.kubernete
s.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e61889f641f5a09c6a334e7d94e7e05adbe55d8bbd3c04cb1dfad91bf26b67a,PodSandboxId:61be98a20264283a28f002672dbfd5add6b063a7004449b35be68226c103c25a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722278657552913010,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd8293080c772c37780ce473c40b2740,},Annotations:map[string]string{io.
kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68ff3286f312a7e6cc7f7eda1f926956aa3e408ea8ac569394382c5e2f65caff,PodSandboxId:59230aff02baeb35dea12224df4120ac6e8475d895b4528515ff4d0ffe2c949a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722278652675737170,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd8293080c772c37780ce473c40b2740,},Annotations:map[string]st
ring{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4cea604b117c735de1f3732a68407d4d9edbfe5bc517161f76a1e8b4f540edd,PodSandboxId:9a05fc85efe4c259433f1431b02dc8e2bc4c2065271649291c91d3ce6d87767f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722278652666715695,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60d4a8cc0f6d4b01cb667cee784c02b9,},Annotations:map[string]string{io.kubernetes.
container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d389fc1bf4eeb57dd955d429698dc986118bfab83fb8f94539d6288ec897d308,PodSandboxId:7764942ab46adf2d3699dd1759ad36ba65545efad294b30b60d454a0222ea17f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722278652604629075,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fd238acbeb786e5ddde053dadb75eeb,},Annotations:map[string]string{io.kubernetes.container.hash: 758
5830f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7844ba6af4b037548772d7ba9aab3b599cc5b42ffa5763882a3370c48658933f,PodSandboxId:a51f9d4338cfb90cbdfcd2dbc9a3e4f9dbf733b60d7a2b4ef8e6ff461387f1b1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722278652562880547,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a722ce892a235bcd005647b7378328,},Annotations:map[string]string{io.kubernetes.container.hash: 6e88c83,io.kubernetes.container.restartCount: 1,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dab08bb9b264b26d33674a128e2c7a091b0aa71b1a13ea91a0075b95ffb3c7a,PodSandboxId:e5f0aabc65bd5caa25962dab8f4ad095d8b45060a17323b5c3e26da47c54b824,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722278652168250755,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sm2kx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fa9e9f4-9f39-41f5-9d79-4f394201011f,},Annotations:map[string]string{io.kubernetes.container.hash: 2fc3ab31,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e50cff59ddb5d30844f6780ac1ffa4e2b09054f5cd6846bf7aedce6c54433e88,PodSandboxId:ab0446dbd9dbbf2a1944c4f41f4b574ecc97916dd2c6cd62c809094edf404a78,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722278626319402914,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g6tp9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 769ed268-b082-415e-b416-a14e68a0084f,},Annotations:map[string]string{io.kubernetes.container.hash: 4a31b561,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b72f1658-814d-44e0-893f-45cdf2ef9688 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:44:43 pause-134415 crio[2761]: time="2024-07-29 18:44:43.319681903Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=32f554e5-aa8f-47e0-99b1-ba62946525e8 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:44:43 pause-134415 crio[2761]: time="2024-07-29 18:44:43.319757496Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=32f554e5-aa8f-47e0-99b1-ba62946525e8 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:44:43 pause-134415 crio[2761]: time="2024-07-29 18:44:43.321497891Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0c594c87-a284-42c8-a2ce-ca81f66d82bc name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:44:43 pause-134415 crio[2761]: time="2024-07-29 18:44:43.321838181Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722278683321816885,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0c594c87-a284-42c8-a2ce-ca81f66d82bc name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:44:43 pause-134415 crio[2761]: time="2024-07-29 18:44:43.322309059Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c3c8140d-7378-426e-b241-cf482ec5dbe2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:44:43 pause-134415 crio[2761]: time="2024-07-29 18:44:43.322358272Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c3c8140d-7378-426e-b241-cf482ec5dbe2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:44:43 pause-134415 crio[2761]: time="2024-07-29 18:44:43.322642741Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b66ff6521b908bcec86efa40985e9c08b511e7d95699d4ed139720453d5061b0,PodSandboxId:ef6ff6ce1302b3edb507ec1f7f6c85daafef2b1d8f75b716be2e7155c05dbd9a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722278661394605298,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g6tp9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 769ed268-b082-415e-b416-a14e68a0084f,},Annotations:map[string]string{io.kubernetes.container.hash: 4a31b561,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e8735cf29fbe7ee72a648e3e0732fa8e64900af4331b75aeadd0f649f1fd34d,PodSandboxId:d6726e028ad3bf754d8c2d5c5e881e126f791f83b44769b4759d1350fcac27f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722278661380885687,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sm2kx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 6fa9e9f4-9f39-41f5-9d79-4f394201011f,},Annotations:map[string]string{io.kubernetes.container.hash: 2fc3ab31,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f442242c7d7c603e6f2b1211eefa28e15b68aff15d73335b267e55e0adde5a3,PodSandboxId:92f73b06af82533868360d6850309d3bb73346f417cb3553fd49d40b2668e985,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722278657544061766,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a722ce892a235bcd005647b7378328,},Annot
ations:map[string]string{io.kubernetes.container.hash: 6e88c83,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bbd6a3a4556fbbe7dabed9875ca38f31a24db9897cfc76e781f41a6446420a2,PodSandboxId:92a0282e7c557cf3c0fcce3aa2abf883e895bb8c99697d6933284596e015e448,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722278657566043011,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fd238acbeb786e5ddde053dadb75eeb,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 7585830f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f087baee3eebf09602b5efdfddd38c12456421d72eaba77ea9ff2ec9ceadc0bc,PodSandboxId:67f46f43207ec552846f5023fccdc6a57a5ca9d2a69c7506f2e13dfec4552a16,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722278657570408829,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60d4a8cc0f6d4b01cb667cee784c02b9,},Annotations:map[string]string{io.kubernete
s.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e61889f641f5a09c6a334e7d94e7e05adbe55d8bbd3c04cb1dfad91bf26b67a,PodSandboxId:61be98a20264283a28f002672dbfd5add6b063a7004449b35be68226c103c25a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722278657552913010,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd8293080c772c37780ce473c40b2740,},Annotations:map[string]string{io.
kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68ff3286f312a7e6cc7f7eda1f926956aa3e408ea8ac569394382c5e2f65caff,PodSandboxId:59230aff02baeb35dea12224df4120ac6e8475d895b4528515ff4d0ffe2c949a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722278652675737170,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd8293080c772c37780ce473c40b2740,},Annotations:map[string]st
ring{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4cea604b117c735de1f3732a68407d4d9edbfe5bc517161f76a1e8b4f540edd,PodSandboxId:9a05fc85efe4c259433f1431b02dc8e2bc4c2065271649291c91d3ce6d87767f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722278652666715695,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60d4a8cc0f6d4b01cb667cee784c02b9,},Annotations:map[string]string{io.kubernetes.
container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d389fc1bf4eeb57dd955d429698dc986118bfab83fb8f94539d6288ec897d308,PodSandboxId:7764942ab46adf2d3699dd1759ad36ba65545efad294b30b60d454a0222ea17f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722278652604629075,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fd238acbeb786e5ddde053dadb75eeb,},Annotations:map[string]string{io.kubernetes.container.hash: 758
5830f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7844ba6af4b037548772d7ba9aab3b599cc5b42ffa5763882a3370c48658933f,PodSandboxId:a51f9d4338cfb90cbdfcd2dbc9a3e4f9dbf733b60d7a2b4ef8e6ff461387f1b1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722278652562880547,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a722ce892a235bcd005647b7378328,},Annotations:map[string]string{io.kubernetes.container.hash: 6e88c83,io.kubernetes.container.restartCount: 1,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dab08bb9b264b26d33674a128e2c7a091b0aa71b1a13ea91a0075b95ffb3c7a,PodSandboxId:e5f0aabc65bd5caa25962dab8f4ad095d8b45060a17323b5c3e26da47c54b824,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722278652168250755,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sm2kx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fa9e9f4-9f39-41f5-9d79-4f394201011f,},Annotations:map[string]string{io.kubernetes.container.hash: 2fc3ab31,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e50cff59ddb5d30844f6780ac1ffa4e2b09054f5cd6846bf7aedce6c54433e88,PodSandboxId:ab0446dbd9dbbf2a1944c4f41f4b574ecc97916dd2c6cd62c809094edf404a78,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722278626319402914,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g6tp9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 769ed268-b082-415e-b416-a14e68a0084f,},Annotations:map[string]string{io.kubernetes.container.hash: 4a31b561,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c3c8140d-7378-426e-b241-cf482ec5dbe2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:44:43 pause-134415 crio[2761]: time="2024-07-29 18:44:43.332687869Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=d425ebf1-f465-43b4-9f3d-763d2a0789a7 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 29 18:44:43 pause-134415 crio[2761]: time="2024-07-29 18:44:43.333187737Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:ef6ff6ce1302b3edb507ec1f7f6c85daafef2b1d8f75b716be2e7155c05dbd9a,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-g6tp9,Uid:769ed268-b082-415e-b416-a14e68a0084f,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722278655269056935,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-g6tp9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 769ed268-b082-415e-b416-a14e68a0084f,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T18:43:45.688885267Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d6726e028ad3bf754d8c2d5c5e881e126f791f83b44769b4759d1350fcac27f9,Metadata:&PodSandboxMetadata{Name:kube-proxy-sm2kx,Uid:6fa9e9f4-9f39-41f5-9d79-4f394201011f,Namespace:kube-system,Attempt
:2,},State:SANDBOX_READY,CreatedAt:1722278655065180934,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-sm2kx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fa9e9f4-9f39-41f5-9d79-4f394201011f,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T18:43:45.526604749Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:92a0282e7c557cf3c0fcce3aa2abf883e895bb8c99697d6933284596e015e448,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-134415,Uid:9fd238acbeb786e5ddde053dadb75eeb,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722278655057038107,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fd238acbeb786e5ddde053dadb75eeb,tier: control-plane,},Annotations:map[string
]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.77:8443,kubernetes.io/config.hash: 9fd238acbeb786e5ddde053dadb75eeb,kubernetes.io/config.seen: 2024-07-29T18:43:29.489167893Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:61be98a20264283a28f002672dbfd5add6b063a7004449b35be68226c103c25a,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-134415,Uid:bd8293080c772c37780ce473c40b2740,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722278654964606879,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd8293080c772c37780ce473c40b2740,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: bd8293080c772c37780ce473c40b2740,kubernetes.io/config.seen: 2024-07-29T18:43:29.489168976Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{
Id:92f73b06af82533868360d6850309d3bb73346f417cb3553fd49d40b2668e985,Metadata:&PodSandboxMetadata{Name:etcd-pause-134415,Uid:56a722ce892a235bcd005647b7378328,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722278654924073517,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a722ce892a235bcd005647b7378328,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.77:2379,kubernetes.io/config.hash: 56a722ce892a235bcd005647b7378328,kubernetes.io/config.seen: 2024-07-29T18:43:29.489163977Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:67f46f43207ec552846f5023fccdc6a57a5ca9d2a69c7506f2e13dfec4552a16,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-134415,Uid:60d4a8cc0f6d4b01cb667cee784c02b9,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722278654912025297,Label
s:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60d4a8cc0f6d4b01cb667cee784c02b9,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 60d4a8cc0f6d4b01cb667cee784c02b9,kubernetes.io/config.seen: 2024-07-29T18:43:29.489169973Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9a05fc85efe4c259433f1431b02dc8e2bc4c2065271649291c91d3ce6d87767f,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-134415,Uid:60d4a8cc0f6d4b01cb667cee784c02b9,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1722278652124030620,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60d4a8cc0f6d4b01cb667cee784c02b9,tier: control-plane,},Annotations:map[string]string{kubernetes.io/confi
g.hash: 60d4a8cc0f6d4b01cb667cee784c02b9,kubernetes.io/config.seen: 2024-07-29T18:43:29.489169973Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7764942ab46adf2d3699dd1759ad36ba65545efad294b30b60d454a0222ea17f,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-134415,Uid:9fd238acbeb786e5ddde053dadb75eeb,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1722278652122054668,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fd238acbeb786e5ddde053dadb75eeb,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.77:8443,kubernetes.io/config.hash: 9fd238acbeb786e5ddde053dadb75eeb,kubernetes.io/config.seen: 2024-07-29T18:43:29.489167893Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:791de24ac120f1204e6dd4cdc348b82dbda994c9e2189e7e
6e0faf38b480153b,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-g6tp9,Uid:769ed268-b082-415e-b416-a14e68a0084f,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1722278652114011658,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-g6tp9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 769ed268-b082-415e-b416-a14e68a0084f,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T18:43:45.688885267Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:59230aff02baeb35dea12224df4120ac6e8475d895b4528515ff4d0ffe2c949a,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-134415,Uid:bd8293080c772c37780ce473c40b2740,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1722278652106266235,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pau
se-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd8293080c772c37780ce473c40b2740,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: bd8293080c772c37780ce473c40b2740,kubernetes.io/config.seen: 2024-07-29T18:43:29.489168976Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a51f9d4338cfb90cbdfcd2dbc9a3e4f9dbf733b60d7a2b4ef8e6ff461387f1b1,Metadata:&PodSandboxMetadata{Name:etcd-pause-134415,Uid:56a722ce892a235bcd005647b7378328,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1722278652102179306,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a722ce892a235bcd005647b7378328,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.77:2379,kubernetes.io/config.hash: 56a722ce892a235bcd005647b7378328,kubernetes.io/config.seen: 2024-07
-29T18:43:29.489163977Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e5f0aabc65bd5caa25962dab8f4ad095d8b45060a17323b5c3e26da47c54b824,Metadata:&PodSandboxMetadata{Name:kube-proxy-sm2kx,Uid:6fa9e9f4-9f39-41f5-9d79-4f394201011f,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1722278651948317087,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-sm2kx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fa9e9f4-9f39-41f5-9d79-4f394201011f,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T18:43:45.526604749Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ab0446dbd9dbbf2a1944c4f41f4b574ecc97916dd2c6cd62c809094edf404a78,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-g6tp9,Uid:769ed268-b082-415e-b416-a14e68a0084f,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:172227862599
7420595,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-g6tp9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 769ed268-b082-415e-b416-a14e68a0084f,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T18:43:45.688885267Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=d425ebf1-f465-43b4-9f3d-763d2a0789a7 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 29 18:44:43 pause-134415 crio[2761]: time="2024-07-29 18:44:43.333857638Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=689924d7-c875-4185-a2e8-819cd24198be name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:44:43 pause-134415 crio[2761]: time="2024-07-29 18:44:43.333906696Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=689924d7-c875-4185-a2e8-819cd24198be name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:44:43 pause-134415 crio[2761]: time="2024-07-29 18:44:43.334131648Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b66ff6521b908bcec86efa40985e9c08b511e7d95699d4ed139720453d5061b0,PodSandboxId:ef6ff6ce1302b3edb507ec1f7f6c85daafef2b1d8f75b716be2e7155c05dbd9a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722278661394605298,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g6tp9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 769ed268-b082-415e-b416-a14e68a0084f,},Annotations:map[string]string{io.kubernetes.container.hash: 4a31b561,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e8735cf29fbe7ee72a648e3e0732fa8e64900af4331b75aeadd0f649f1fd34d,PodSandboxId:d6726e028ad3bf754d8c2d5c5e881e126f791f83b44769b4759d1350fcac27f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722278661380885687,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sm2kx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 6fa9e9f4-9f39-41f5-9d79-4f394201011f,},Annotations:map[string]string{io.kubernetes.container.hash: 2fc3ab31,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f442242c7d7c603e6f2b1211eefa28e15b68aff15d73335b267e55e0adde5a3,PodSandboxId:92f73b06af82533868360d6850309d3bb73346f417cb3553fd49d40b2668e985,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722278657544061766,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a722ce892a235bcd005647b7378328,},Annot
ations:map[string]string{io.kubernetes.container.hash: 6e88c83,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bbd6a3a4556fbbe7dabed9875ca38f31a24db9897cfc76e781f41a6446420a2,PodSandboxId:92a0282e7c557cf3c0fcce3aa2abf883e895bb8c99697d6933284596e015e448,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722278657566043011,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fd238acbeb786e5ddde053dadb75eeb,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 7585830f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f087baee3eebf09602b5efdfddd38c12456421d72eaba77ea9ff2ec9ceadc0bc,PodSandboxId:67f46f43207ec552846f5023fccdc6a57a5ca9d2a69c7506f2e13dfec4552a16,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722278657570408829,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60d4a8cc0f6d4b01cb667cee784c02b9,},Annotations:map[string]string{io.kubernete
s.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e61889f641f5a09c6a334e7d94e7e05adbe55d8bbd3c04cb1dfad91bf26b67a,PodSandboxId:61be98a20264283a28f002672dbfd5add6b063a7004449b35be68226c103c25a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722278657552913010,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd8293080c772c37780ce473c40b2740,},Annotations:map[string]string{io.
kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68ff3286f312a7e6cc7f7eda1f926956aa3e408ea8ac569394382c5e2f65caff,PodSandboxId:59230aff02baeb35dea12224df4120ac6e8475d895b4528515ff4d0ffe2c949a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722278652675737170,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd8293080c772c37780ce473c40b2740,},Annotations:map[string]st
ring{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4cea604b117c735de1f3732a68407d4d9edbfe5bc517161f76a1e8b4f540edd,PodSandboxId:9a05fc85efe4c259433f1431b02dc8e2bc4c2065271649291c91d3ce6d87767f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722278652666715695,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60d4a8cc0f6d4b01cb667cee784c02b9,},Annotations:map[string]string{io.kubernetes.
container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d389fc1bf4eeb57dd955d429698dc986118bfab83fb8f94539d6288ec897d308,PodSandboxId:7764942ab46adf2d3699dd1759ad36ba65545efad294b30b60d454a0222ea17f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722278652604629075,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fd238acbeb786e5ddde053dadb75eeb,},Annotations:map[string]string{io.kubernetes.container.hash: 758
5830f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7844ba6af4b037548772d7ba9aab3b599cc5b42ffa5763882a3370c48658933f,PodSandboxId:a51f9d4338cfb90cbdfcd2dbc9a3e4f9dbf733b60d7a2b4ef8e6ff461387f1b1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722278652562880547,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-134415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a722ce892a235bcd005647b7378328,},Annotations:map[string]string{io.kubernetes.container.hash: 6e88c83,io.kubernetes.container.restartCount: 1,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dab08bb9b264b26d33674a128e2c7a091b0aa71b1a13ea91a0075b95ffb3c7a,PodSandboxId:e5f0aabc65bd5caa25962dab8f4ad095d8b45060a17323b5c3e26da47c54b824,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722278652168250755,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sm2kx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fa9e9f4-9f39-41f5-9d79-4f394201011f,},Annotations:map[string]string{io.kubernetes.container.hash: 2fc3ab31,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e50cff59ddb5d30844f6780ac1ffa4e2b09054f5cd6846bf7aedce6c54433e88,PodSandboxId:ab0446dbd9dbbf2a1944c4f41f4b574ecc97916dd2c6cd62c809094edf404a78,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722278626319402914,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g6tp9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 769ed268-b082-415e-b416-a14e68a0084f,},Annotations:map[string]string{io.kubernetes.container.hash: 4a31b561,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=689924d7-c875-4185-a2e8-819cd24198be name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b66ff6521b908       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   21 seconds ago      Running             coredns                   1                   ef6ff6ce1302b       coredns-7db6d8ff4d-g6tp9
	0e8735cf29fbe       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   22 seconds ago      Running             kube-proxy                2                   d6726e028ad3b       kube-proxy-sm2kx
	f087baee3eebf       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   25 seconds ago      Running             kube-scheduler            2                   67f46f43207ec       kube-scheduler-pause-134415
	4bbd6a3a4556f       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   25 seconds ago      Running             kube-apiserver            2                   92a0282e7c557       kube-apiserver-pause-134415
	1e61889f641f5       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   25 seconds ago      Running             kube-controller-manager   2                   61be98a202642       kube-controller-manager-pause-134415
	9f442242c7d7c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   25 seconds ago      Running             etcd                      2                   92f73b06af825       etcd-pause-134415
	68ff3286f312a       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   30 seconds ago      Exited              kube-controller-manager   1                   59230aff02bae       kube-controller-manager-pause-134415
	e4cea604b117c       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   30 seconds ago      Exited              kube-scheduler            1                   9a05fc85efe4c       kube-scheduler-pause-134415
	d389fc1bf4eeb       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   30 seconds ago      Exited              kube-apiserver            1                   7764942ab46ad       kube-apiserver-pause-134415
	7844ba6af4b03       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   30 seconds ago      Exited              etcd                      1                   a51f9d4338cfb       etcd-pause-134415
	4dab08bb9b264       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   31 seconds ago      Exited              kube-proxy                1                   e5f0aabc65bd5       kube-proxy-sm2kx
	e50cff59ddb5d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   57 seconds ago      Exited              coredns                   0                   ab0446dbd9dbb       coredns-7db6d8ff4d-g6tp9
	
	
	==> coredns [b66ff6521b908bcec86efa40985e9c08b511e7d95699d4ed139720453d5061b0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:35449 - 49410 "HINFO IN 4671401912247073721.1201125565082427322. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009955718s
	
	
	==> coredns [e50cff59ddb5d30844f6780ac1ffa4e2b09054f5cd6846bf7aedce6c54433e88] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:58158 - 1188 "HINFO IN 7325360415050638714.4383664452241786804. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017025551s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/kubernetes: Trace[1557520512]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 18:43:46.526) (total time: 16967ms):
	Trace[1557520512]: [16.96727735s] [16.96727735s] END
	[INFO] plugin/kubernetes: Trace[808941779]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 18:43:46.525) (total time: 16969ms):
	Trace[808941779]: [16.969201841s] [16.969201841s] END
	[INFO] plugin/kubernetes: Trace[1185478479]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 18:43:46.525) (total time: 16969ms):
	Trace[1185478479]: [16.969309099s] [16.969309099s] END
	
	
	==> describe nodes <==
	Name:               pause-134415
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-134415
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=de53afae5e8f4269438099f4dad14d93a8a17e35
	                    minikube.k8s.io/name=pause-134415
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T18_43_30_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 18:43:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-134415
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 18:44:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 18:44:20 +0000   Mon, 29 Jul 2024 18:43:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 18:44:20 +0000   Mon, 29 Jul 2024 18:43:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 18:44:20 +0000   Mon, 29 Jul 2024 18:43:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 18:44:20 +0000   Mon, 29 Jul 2024 18:43:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.77
	  Hostname:    pause-134415
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 19cbbd61a6e5474cb723ecddf35bc51b
	  System UUID:                19cbbd61-a6e5-474c-b723-ecddf35bc51b
	  Boot ID:                    9715c901-bdc1-40d4-acda-0a539b9ea554
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-g6tp9                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     58s
	  kube-system                 etcd-pause-134415                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         74s
	  kube-system                 kube-apiserver-pause-134415             250m (12%)    0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-controller-manager-pause-134415    200m (10%)    0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 kube-proxy-sm2kx                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-scheduler-pause-134415             100m (5%)     0 (0%)      0 (0%)           0 (0%)         74s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 57s                kube-proxy       
	  Normal  Starting                 21s                kube-proxy       
	  Normal  Starting                 80s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  80s (x8 over 80s)  kubelet          Node pause-134415 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    80s (x8 over 80s)  kubelet          Node pause-134415 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     80s (x7 over 80s)  kubelet          Node pause-134415 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  80s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    74s                kubelet          Node pause-134415 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  74s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  74s                kubelet          Node pause-134415 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     74s                kubelet          Node pause-134415 status is now: NodeHasSufficientPID
	  Normal  Starting                 74s                kubelet          Starting kubelet.
	  Normal  NodeReady                73s                kubelet          Node pause-134415 status is now: NodeReady
	  Normal  RegisteredNode           61s                node-controller  Node pause-134415 event: Registered Node pause-134415 in Controller
	  Normal  Starting                 26s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  26s (x8 over 26s)  kubelet          Node pause-134415 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    26s (x8 over 26s)  kubelet          Node pause-134415 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     26s (x7 over 26s)  kubelet          Node pause-134415 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  26s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10s                node-controller  Node pause-134415 event: Registered Node pause-134415 in Controller
	
	
	==> dmesg <==
	[  +8.828988] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.063536] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.070085] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.187204] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.112096] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.271387] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +4.467235] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[  +0.074226] kauditd_printk_skb: 130 callbacks suppressed
	[  +5.146178] systemd-fstab-generator[945]: Ignoring "noauto" option for root device
	[  +0.083024] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.012367] systemd-fstab-generator[1280]: Ignoring "noauto" option for root device
	[  +0.075923] kauditd_printk_skb: 69 callbacks suppressed
	[ +14.214377] systemd-fstab-generator[1485]: Ignoring "noauto" option for root device
	[  +0.078992] kauditd_printk_skb: 21 callbacks suppressed
	[Jul29 18:44] systemd-fstab-generator[2128]: Ignoring "noauto" option for root device
	[  +0.047387] kauditd_printk_skb: 69 callbacks suppressed
	[  +0.125339] systemd-fstab-generator[2176]: Ignoring "noauto" option for root device
	[  +0.266792] systemd-fstab-generator[2224]: Ignoring "noauto" option for root device
	[  +0.295570] systemd-fstab-generator[2351]: Ignoring "noauto" option for root device
	[  +0.744390] systemd-fstab-generator[2641]: Ignoring "noauto" option for root device
	[  +1.032543] systemd-fstab-generator[2916]: Ignoring "noauto" option for root device
	[  +2.617927] systemd-fstab-generator[3349]: Ignoring "noauto" option for root device
	[  +0.073023] kauditd_printk_skb: 238 callbacks suppressed
	[ +16.199637] kauditd_printk_skb: 49 callbacks suppressed
	[  +4.007503] systemd-fstab-generator[3770]: Ignoring "noauto" option for root device
	
	
	==> etcd [7844ba6af4b037548772d7ba9aab3b599cc5b42ffa5763882a3370c48658933f] <==
	{"level":"info","ts":"2024-07-29T18:44:13.014303Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"18.600101ms"}
	{"level":"info","ts":"2024-07-29T18:44:13.079358Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-07-29T18:44:13.159964Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"8c1b28ae7d8b253d","local-member-id":"8989ab7f8b274152","commit-index":431}
	{"level":"info","ts":"2024-07-29T18:44:13.16008Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8989ab7f8b274152 switched to configuration voters=()"}
	{"level":"info","ts":"2024-07-29T18:44:13.160135Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8989ab7f8b274152 became follower at term 2"}
	{"level":"info","ts":"2024-07-29T18:44:13.160147Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 8989ab7f8b274152 [peers: [], term: 2, commit: 431, applied: 0, lastindex: 431, lastterm: 2]"}
	{"level":"warn","ts":"2024-07-29T18:44:13.190611Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-07-29T18:44:13.228997Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":415}
	{"level":"info","ts":"2024-07-29T18:44:13.258815Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-07-29T18:44:13.265512Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"8989ab7f8b274152","timeout":"7s"}
	{"level":"info","ts":"2024-07-29T18:44:13.269724Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"8989ab7f8b274152"}
	{"level":"info","ts":"2024-07-29T18:44:13.269983Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"8989ab7f8b274152","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-07-29T18:44:13.27065Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8989ab7f8b274152 switched to configuration voters=(9910641019289289042)"}
	{"level":"info","ts":"2024-07-29T18:44:13.273936Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"8c1b28ae7d8b253d","local-member-id":"8989ab7f8b274152","added-peer-id":"8989ab7f8b274152","added-peer-peer-urls":["https://192.168.61.77:2380"]}
	{"level":"info","ts":"2024-07-29T18:44:13.274611Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8c1b28ae7d8b253d","local-member-id":"8989ab7f8b274152","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T18:44:13.277775Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T18:44:13.280939Z","caller":"etcdserver/server.go:744","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"8989ab7f8b274152","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-07-29T18:44:13.284737Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T18:44:13.319336Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T18:44:13.323189Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T18:44:13.337097Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.77:2380"}
	{"level":"info","ts":"2024-07-29T18:44:13.33757Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.77:2380"}
	{"level":"info","ts":"2024-07-29T18:44:13.332414Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T18:44:13.343731Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8989ab7f8b274152","initial-advertise-peer-urls":["https://192.168.61.77:2380"],"listen-peer-urls":["https://192.168.61.77:2380"],"advertise-client-urls":["https://192.168.61.77:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.77:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T18:44:13.343763Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	
	
	==> etcd [9f442242c7d7c603e6f2b1211eefa28e15b68aff15d73335b267e55e0adde5a3] <==
	{"level":"info","ts":"2024-07-29T18:44:18.158691Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T18:44:18.158726Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T18:44:18.160687Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8989ab7f8b274152 switched to configuration voters=(9910641019289289042)"}
	{"level":"info","ts":"2024-07-29T18:44:18.163549Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"8c1b28ae7d8b253d","local-member-id":"8989ab7f8b274152","added-peer-id":"8989ab7f8b274152","added-peer-peer-urls":["https://192.168.61.77:2380"]}
	{"level":"info","ts":"2024-07-29T18:44:18.1637Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8c1b28ae7d8b253d","local-member-id":"8989ab7f8b274152","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T18:44:18.163768Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T18:44:18.16794Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T18:44:18.168137Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8989ab7f8b274152","initial-advertise-peer-urls":["https://192.168.61.77:2380"],"listen-peer-urls":["https://192.168.61.77:2380"],"advertise-client-urls":["https://192.168.61.77:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.77:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T18:44:18.168183Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T18:44:18.168288Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.77:2380"}
	{"level":"info","ts":"2024-07-29T18:44:18.168312Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.77:2380"}
	{"level":"info","ts":"2024-07-29T18:44:19.272139Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8989ab7f8b274152 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-29T18:44:19.272242Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8989ab7f8b274152 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-29T18:44:19.272279Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8989ab7f8b274152 received MsgPreVoteResp from 8989ab7f8b274152 at term 2"}
	{"level":"info","ts":"2024-07-29T18:44:19.272309Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8989ab7f8b274152 became candidate at term 3"}
	{"level":"info","ts":"2024-07-29T18:44:19.272333Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8989ab7f8b274152 received MsgVoteResp from 8989ab7f8b274152 at term 3"}
	{"level":"info","ts":"2024-07-29T18:44:19.27236Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8989ab7f8b274152 became leader at term 3"}
	{"level":"info","ts":"2024-07-29T18:44:19.272385Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8989ab7f8b274152 elected leader 8989ab7f8b274152 at term 3"}
	{"level":"info","ts":"2024-07-29T18:44:19.279628Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T18:44:19.279897Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T18:44:19.279627Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"8989ab7f8b274152","local-member-attributes":"{Name:pause-134415 ClientURLs:[https://192.168.61.77:2379]}","request-path":"/0/members/8989ab7f8b274152/attributes","cluster-id":"8c1b28ae7d8b253d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T18:44:19.280169Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T18:44:19.280196Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T18:44:19.28181Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.77:2379"}
	{"level":"info","ts":"2024-07-29T18:44:19.281913Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 18:44:43 up 1 min,  0 users,  load average: 1.00, 0.39, 0.14
	Linux pause-134415 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4bbd6a3a4556fbbe7dabed9875ca38f31a24db9897cfc76e781f41a6446420a2] <==
	I0729 18:44:20.738164       1 shared_informer.go:320] Caches are synced for configmaps
	I0729 18:44:20.738351       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0729 18:44:20.738381       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0729 18:44:20.738577       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 18:44:20.741611       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0729 18:44:20.744162       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 18:44:20.755762       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0729 18:44:20.755869       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0729 18:44:20.755911       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 18:44:20.755928       1 policy_source.go:224] refreshing policies
	E0729 18:44:20.758985       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0729 18:44:20.760803       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 18:44:20.767986       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0729 18:44:20.768050       1 aggregator.go:165] initial CRD sync complete...
	I0729 18:44:20.768091       1 autoregister_controller.go:141] Starting autoregister controller
	I0729 18:44:20.768114       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0729 18:44:20.768136       1 cache.go:39] Caches are synced for autoregister controller
	I0729 18:44:21.556184       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0729 18:44:22.359936       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0729 18:44:22.371391       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0729 18:44:22.408741       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 18:44:22.438366       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0729 18:44:22.445789       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0729 18:44:33.095390       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0729 18:44:33.219206       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [d389fc1bf4eeb57dd955d429698dc986118bfab83fb8f94539d6288ec897d308] <==
	I0729 18:44:13.241145       1 options.go:221] external host was not specified, using 192.168.61.77
	I0729 18:44:13.242173       1 server.go:148] Version: v1.30.3
	I0729 18:44:13.242217       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-controller-manager [1e61889f641f5a09c6a334e7d94e7e05adbe55d8bbd3c04cb1dfad91bf26b67a] <==
	I0729 18:44:33.087321       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0729 18:44:33.092581       1 shared_informer.go:320] Caches are synced for job
	I0729 18:44:33.095788       1 shared_informer.go:320] Caches are synced for deployment
	I0729 18:44:33.097810       1 shared_informer.go:320] Caches are synced for daemon sets
	I0729 18:44:33.100794       1 shared_informer.go:320] Caches are synced for PVC protection
	I0729 18:44:33.103812       1 shared_informer.go:320] Caches are synced for taint
	I0729 18:44:33.104128       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0729 18:44:33.104324       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-134415"
	I0729 18:44:33.104393       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0729 18:44:33.104967       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0729 18:44:33.107694       1 shared_informer.go:320] Caches are synced for GC
	I0729 18:44:33.111512       1 shared_informer.go:320] Caches are synced for endpoint
	I0729 18:44:33.120075       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 18:44:33.120360       1 shared_informer.go:320] Caches are synced for HPA
	I0729 18:44:33.127818       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 18:44:33.147201       1 shared_informer.go:320] Caches are synced for ephemeral
	I0729 18:44:33.153018       1 shared_informer.go:320] Caches are synced for attach detach
	I0729 18:44:33.165572       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0729 18:44:33.169257       1 shared_informer.go:320] Caches are synced for stateful set
	I0729 18:44:33.170678       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0729 18:44:33.177306       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="71.948752ms"
	I0729 18:44:33.177515       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="120.851µs"
	I0729 18:44:33.538785       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 18:44:33.538822       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0729 18:44:33.572029       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [68ff3286f312a7e6cc7f7eda1f926956aa3e408ea8ac569394382c5e2f65caff] <==
	
	
	==> kube-proxy [0e8735cf29fbe7ee72a648e3e0732fa8e64900af4331b75aeadd0f649f1fd34d] <==
	I0729 18:44:21.572388       1 server_linux.go:69] "Using iptables proxy"
	I0729 18:44:21.589234       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.77"]
	I0729 18:44:21.660012       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 18:44:21.660083       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 18:44:21.660103       1 server_linux.go:165] "Using iptables Proxier"
	I0729 18:44:21.663214       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 18:44:21.663559       1 server.go:872] "Version info" version="v1.30.3"
	I0729 18:44:21.663592       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 18:44:21.664812       1 config.go:192] "Starting service config controller"
	I0729 18:44:21.664849       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 18:44:21.665342       1 config.go:101] "Starting endpoint slice config controller"
	I0729 18:44:21.665374       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 18:44:21.665940       1 config.go:319] "Starting node config controller"
	I0729 18:44:21.665970       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 18:44:21.765570       1 shared_informer.go:320] Caches are synced for service config
	I0729 18:44:21.765688       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 18:44:21.766211       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [4dab08bb9b264b26d33674a128e2c7a091b0aa71b1a13ea91a0075b95ffb3c7a] <==
	
	
	==> kube-scheduler [e4cea604b117c735de1f3732a68407d4d9edbfe5bc517161f76a1e8b4f540edd] <==
	
	
	==> kube-scheduler [f087baee3eebf09602b5efdfddd38c12456421d72eaba77ea9ff2ec9ceadc0bc] <==
	I0729 18:44:18.605989       1 serving.go:380] Generated self-signed cert in-memory
	W0729 18:44:20.634633       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 18:44:20.634825       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 18:44:20.634945       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 18:44:20.634979       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 18:44:20.671700       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0729 18:44:20.671798       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 18:44:20.675669       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 18:44:20.675759       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 18:44:20.676346       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0729 18:44:20.676401       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 18:44:20.776059       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 18:44:17 pause-134415 kubelet[3356]: I0729 18:44:17.308672    3356 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bd8293080c772c37780ce473c40b2740-ca-certs\") pod \"kube-controller-manager-pause-134415\" (UID: \"bd8293080c772c37780ce473c40b2740\") " pod="kube-system/kube-controller-manager-pause-134415"
	Jul 29 18:44:17 pause-134415 kubelet[3356]: I0729 18:44:17.308709    3356 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd8293080c772c37780ce473c40b2740-kubeconfig\") pod \"kube-controller-manager-pause-134415\" (UID: \"bd8293080c772c37780ce473c40b2740\") " pod="kube-system/kube-controller-manager-pause-134415"
	Jul 29 18:44:17 pause-134415 kubelet[3356]: I0729 18:44:17.308726    3356 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/60d4a8cc0f6d4b01cb667cee784c02b9-kubeconfig\") pod \"kube-scheduler-pause-134415\" (UID: \"60d4a8cc0f6d4b01cb667cee784c02b9\") " pod="kube-system/kube-scheduler-pause-134415"
	Jul 29 18:44:17 pause-134415 kubelet[3356]: I0729 18:44:17.356324    3356 kubelet_node_status.go:73] "Attempting to register node" node="pause-134415"
	Jul 29 18:44:17 pause-134415 kubelet[3356]: E0729 18:44:17.357317    3356 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.77:8443: connect: connection refused" node="pause-134415"
	Jul 29 18:44:17 pause-134415 kubelet[3356]: I0729 18:44:17.527583    3356 scope.go:117] "RemoveContainer" containerID="68ff3286f312a7e6cc7f7eda1f926956aa3e408ea8ac569394382c5e2f65caff"
	Jul 29 18:44:17 pause-134415 kubelet[3356]: I0729 18:44:17.527834    3356 scope.go:117] "RemoveContainer" containerID="e4cea604b117c735de1f3732a68407d4d9edbfe5bc517161f76a1e8b4f540edd"
	Jul 29 18:44:17 pause-134415 kubelet[3356]: I0729 18:44:17.528862    3356 scope.go:117] "RemoveContainer" containerID="d389fc1bf4eeb57dd955d429698dc986118bfab83fb8f94539d6288ec897d308"
	Jul 29 18:44:17 pause-134415 kubelet[3356]: I0729 18:44:17.529526    3356 scope.go:117] "RemoveContainer" containerID="7844ba6af4b037548772d7ba9aab3b599cc5b42ffa5763882a3370c48658933f"
	Jul 29 18:44:17 pause-134415 kubelet[3356]: E0729 18:44:17.657333    3356 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-134415?timeout=10s\": dial tcp 192.168.61.77:8443: connect: connection refused" interval="800ms"
	Jul 29 18:44:17 pause-134415 kubelet[3356]: I0729 18:44:17.759900    3356 kubelet_node_status.go:73] "Attempting to register node" node="pause-134415"
	Jul 29 18:44:17 pause-134415 kubelet[3356]: E0729 18:44:17.760976    3356 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.77:8443: connect: connection refused" node="pause-134415"
	Jul 29 18:44:18 pause-134415 kubelet[3356]: I0729 18:44:18.562557    3356 kubelet_node_status.go:73] "Attempting to register node" node="pause-134415"
	Jul 29 18:44:20 pause-134415 kubelet[3356]: I0729 18:44:20.851199    3356 kubelet_node_status.go:112] "Node was previously registered" node="pause-134415"
	Jul 29 18:44:20 pause-134415 kubelet[3356]: I0729 18:44:20.851296    3356 kubelet_node_status.go:76] "Successfully registered node" node="pause-134415"
	Jul 29 18:44:20 pause-134415 kubelet[3356]: I0729 18:44:20.853242    3356 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 29 18:44:20 pause-134415 kubelet[3356]: I0729 18:44:20.854294    3356 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 29 18:44:21 pause-134415 kubelet[3356]: I0729 18:44:21.049673    3356 apiserver.go:52] "Watching apiserver"
	Jul 29 18:44:21 pause-134415 kubelet[3356]: I0729 18:44:21.053582    3356 topology_manager.go:215] "Topology Admit Handler" podUID="6fa9e9f4-9f39-41f5-9d79-4f394201011f" podNamespace="kube-system" podName="kube-proxy-sm2kx"
	Jul 29 18:44:21 pause-134415 kubelet[3356]: I0729 18:44:21.054945    3356 topology_manager.go:215] "Topology Admit Handler" podUID="769ed268-b082-415e-b416-a14e68a0084f" podNamespace="kube-system" podName="coredns-7db6d8ff4d-g6tp9"
	Jul 29 18:44:21 pause-134415 kubelet[3356]: I0729 18:44:21.057106    3356 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6fa9e9f4-9f39-41f5-9d79-4f394201011f-xtables-lock\") pod \"kube-proxy-sm2kx\" (UID: \"6fa9e9f4-9f39-41f5-9d79-4f394201011f\") " pod="kube-system/kube-proxy-sm2kx"
	Jul 29 18:44:21 pause-134415 kubelet[3356]: I0729 18:44:21.057164    3356 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6fa9e9f4-9f39-41f5-9d79-4f394201011f-lib-modules\") pod \"kube-proxy-sm2kx\" (UID: \"6fa9e9f4-9f39-41f5-9d79-4f394201011f\") " pod="kube-system/kube-proxy-sm2kx"
	Jul 29 18:44:21 pause-134415 kubelet[3356]: I0729 18:44:21.155100    3356 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 29 18:44:21 pause-134415 kubelet[3356]: I0729 18:44:21.354706    3356 scope.go:117] "RemoveContainer" containerID="4dab08bb9b264b26d33674a128e2c7a091b0aa71b1a13ea91a0075b95ffb3c7a"
	Jul 29 18:44:28 pause-134415 kubelet[3356]: I0729 18:44:28.154324    3356 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-134415 -n pause-134415
helpers_test.go:261: (dbg) Run:  kubectl --context pause-134415 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (55.21s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (268.92s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-834964 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-834964 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m28.618733624s)

                                                
                                                
-- stdout --
	* [old-k8s-version-834964] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19339
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19339-88081/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19339-88081/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-834964" primary control-plane node in "old-k8s-version-834964" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 18:49:14.848733  146540 out.go:291] Setting OutFile to fd 1 ...
	I0729 18:49:14.849064  146540 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:49:14.849076  146540 out.go:304] Setting ErrFile to fd 2...
	I0729 18:49:14.849082  146540 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:49:14.849391  146540 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19339-88081/.minikube/bin
	I0729 18:49:14.850117  146540 out.go:298] Setting JSON to false
	I0729 18:49:14.851617  146540 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":12675,"bootTime":1722266280,"procs":303,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 18:49:14.851697  146540 start.go:139] virtualization: kvm guest
	I0729 18:49:14.853982  146540 out.go:177] * [old-k8s-version-834964] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 18:49:14.855701  146540 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 18:49:14.855702  146540 notify.go:220] Checking for updates...
	I0729 18:49:14.858144  146540 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 18:49:14.859348  146540 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19339-88081/kubeconfig
	I0729 18:49:14.860751  146540 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19339-88081/.minikube
	I0729 18:49:14.861921  146540 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 18:49:14.863182  146540 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 18:49:14.864766  146540 config.go:182] Loaded profile config "bridge-085245": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:49:14.864909  146540 config.go:182] Loaded profile config "calico-085245": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:49:14.865007  146540 config.go:182] Loaded profile config "cert-expiration-974855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:49:14.865114  146540 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 18:49:14.903323  146540 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 18:49:14.904598  146540 start.go:297] selected driver: kvm2
	I0729 18:49:14.904611  146540 start.go:901] validating driver "kvm2" against <nil>
	I0729 18:49:14.904626  146540 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 18:49:14.905366  146540 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 18:49:14.905456  146540 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19339-88081/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 18:49:14.921206  146540 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 18:49:14.921262  146540 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 18:49:14.921497  146540 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 18:49:14.921526  146540 cni.go:84] Creating CNI manager for ""
	I0729 18:49:14.921536  146540 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:49:14.921548  146540 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 18:49:14.921610  146540 start.go:340] cluster config:
	{Name:old-k8s-version-834964 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-834964 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:49:14.921742  146540 iso.go:125] acquiring lock: {Name:mkff602c753bfa3e0d79d57ee8dc490f2e8d0298 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 18:49:14.923596  146540 out.go:177] * Starting "old-k8s-version-834964" primary control-plane node in "old-k8s-version-834964" cluster
	I0729 18:49:14.924883  146540 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 18:49:14.924920  146540 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 18:49:14.924929  146540 cache.go:56] Caching tarball of preloaded images
	I0729 18:49:14.925015  146540 preload.go:172] Found /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 18:49:14.925027  146540 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0729 18:49:14.925137  146540 profile.go:143] Saving config to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/old-k8s-version-834964/config.json ...
	I0729 18:49:14.925160  146540 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/old-k8s-version-834964/config.json: {Name:mkaa41da62a8994662ab6db090f957f1a1e041fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:49:14.925300  146540 start.go:360] acquireMachinesLock for old-k8s-version-834964: {Name:mkb4fc91615189a18c2505c715f6575ee0da2912 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 18:49:14.925337  146540 start.go:364] duration metric: took 15.724µs to acquireMachinesLock for "old-k8s-version-834964"
	I0729 18:49:14.925357  146540 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-834964 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-834964 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 18:49:14.925445  146540 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 18:49:14.927101  146540 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 18:49:14.927257  146540 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:49:14.927304  146540 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:49:14.942908  146540 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45211
	I0729 18:49:14.943366  146540 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:49:14.944051  146540 main.go:141] libmachine: Using API Version  1
	I0729 18:49:14.944077  146540 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:49:14.944525  146540 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:49:14.944773  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetMachineName
	I0729 18:49:14.945011  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .DriverName
	I0729 18:49:14.945181  146540 start.go:159] libmachine.API.Create for "old-k8s-version-834964" (driver="kvm2")
	I0729 18:49:14.945238  146540 client.go:168] LocalClient.Create starting
	I0729 18:49:14.945281  146540 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem
	I0729 18:49:14.945324  146540 main.go:141] libmachine: Decoding PEM data...
	I0729 18:49:14.945348  146540 main.go:141] libmachine: Parsing certificate...
	I0729 18:49:14.945424  146540 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem
	I0729 18:49:14.945452  146540 main.go:141] libmachine: Decoding PEM data...
	I0729 18:49:14.945471  146540 main.go:141] libmachine: Parsing certificate...
	I0729 18:49:14.945495  146540 main.go:141] libmachine: Running pre-create checks...
	I0729 18:49:14.945511  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .PreCreateCheck
	I0729 18:49:14.945928  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetConfigRaw
	I0729 18:49:14.946324  146540 main.go:141] libmachine: Creating machine...
	I0729 18:49:14.946338  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .Create
	I0729 18:49:14.946469  146540 main.go:141] libmachine: (old-k8s-version-834964) Creating KVM machine...
	I0729 18:49:14.947934  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | found existing default KVM network
	I0729 18:49:14.949733  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | I0729 18:49:14.949535  146562 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:2a:46:27} reservation:<nil>}
	I0729 18:49:14.950764  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | I0729 18:49:14.950668  146562 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:7b:af:8a} reservation:<nil>}
	I0729 18:49:14.952237  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | I0729 18:49:14.952144  146562 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a30d0}
	I0729 18:49:14.952260  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | created network xml: 
	I0729 18:49:14.952277  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | <network>
	I0729 18:49:14.952292  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG |   <name>mk-old-k8s-version-834964</name>
	I0729 18:49:14.952309  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG |   <dns enable='no'/>
	I0729 18:49:14.952321  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG |   
	I0729 18:49:14.952333  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0729 18:49:14.952346  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG |     <dhcp>
	I0729 18:49:14.952356  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0729 18:49:14.952364  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG |     </dhcp>
	I0729 18:49:14.952372  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG |   </ip>
	I0729 18:49:14.952380  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG |   
	I0729 18:49:14.952396  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | </network>
	I0729 18:49:14.952409  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | 
	I0729 18:49:14.957938  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | trying to create private KVM network mk-old-k8s-version-834964 192.168.61.0/24...
	I0729 18:49:15.038522  146540 main.go:141] libmachine: (old-k8s-version-834964) Setting up store path in /home/jenkins/minikube-integration/19339-88081/.minikube/machines/old-k8s-version-834964 ...
	I0729 18:49:15.038558  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | private KVM network mk-old-k8s-version-834964 192.168.61.0/24 created
	I0729 18:49:15.038572  146540 main.go:141] libmachine: (old-k8s-version-834964) Building disk image from file:///home/jenkins/minikube-integration/19339-88081/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0729 18:49:15.038593  146540 main.go:141] libmachine: (old-k8s-version-834964) Downloading /home/jenkins/minikube-integration/19339-88081/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19339-88081/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0729 18:49:15.038608  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | I0729 18:49:15.037415  146562 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19339-88081/.minikube
	I0729 18:49:15.314132  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | I0729 18:49:15.313989  146562 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/old-k8s-version-834964/id_rsa...
	I0729 18:49:15.572299  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | I0729 18:49:15.572165  146562 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/old-k8s-version-834964/old-k8s-version-834964.rawdisk...
	I0729 18:49:15.572327  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | Writing magic tar header
	I0729 18:49:15.572340  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | Writing SSH key tar header
	I0729 18:49:15.572352  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | I0729 18:49:15.572296  146562 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19339-88081/.minikube/machines/old-k8s-version-834964 ...
	I0729 18:49:15.572446  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/old-k8s-version-834964
	I0729 18:49:15.572476  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19339-88081/.minikube/machines
	I0729 18:49:15.572518  146540 main.go:141] libmachine: (old-k8s-version-834964) Setting executable bit set on /home/jenkins/minikube-integration/19339-88081/.minikube/machines/old-k8s-version-834964 (perms=drwx------)
	I0729 18:49:15.572532  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19339-88081/.minikube
	I0729 18:49:15.572543  146540 main.go:141] libmachine: (old-k8s-version-834964) Setting executable bit set on /home/jenkins/minikube-integration/19339-88081/.minikube/machines (perms=drwxr-xr-x)
	I0729 18:49:15.572560  146540 main.go:141] libmachine: (old-k8s-version-834964) Setting executable bit set on /home/jenkins/minikube-integration/19339-88081/.minikube (perms=drwxr-xr-x)
	I0729 18:49:15.572578  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19339-88081
	I0729 18:49:15.572591  146540 main.go:141] libmachine: (old-k8s-version-834964) Setting executable bit set on /home/jenkins/minikube-integration/19339-88081 (perms=drwxrwxr-x)
	I0729 18:49:15.572603  146540 main.go:141] libmachine: (old-k8s-version-834964) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 18:49:15.572612  146540 main.go:141] libmachine: (old-k8s-version-834964) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 18:49:15.572623  146540 main.go:141] libmachine: (old-k8s-version-834964) Creating domain...
	I0729 18:49:15.572639  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 18:49:15.572650  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | Checking permissions on dir: /home/jenkins
	I0729 18:49:15.572661  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | Checking permissions on dir: /home
	I0729 18:49:15.572675  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | Skipping /home - not owner
	I0729 18:49:15.573798  146540 main.go:141] libmachine: (old-k8s-version-834964) define libvirt domain using xml: 
	I0729 18:49:15.573827  146540 main.go:141] libmachine: (old-k8s-version-834964) <domain type='kvm'>
	I0729 18:49:15.573838  146540 main.go:141] libmachine: (old-k8s-version-834964)   <name>old-k8s-version-834964</name>
	I0729 18:49:15.573846  146540 main.go:141] libmachine: (old-k8s-version-834964)   <memory unit='MiB'>2200</memory>
	I0729 18:49:15.573855  146540 main.go:141] libmachine: (old-k8s-version-834964)   <vcpu>2</vcpu>
	I0729 18:49:15.573865  146540 main.go:141] libmachine: (old-k8s-version-834964)   <features>
	I0729 18:49:15.573874  146540 main.go:141] libmachine: (old-k8s-version-834964)     <acpi/>
	I0729 18:49:15.573886  146540 main.go:141] libmachine: (old-k8s-version-834964)     <apic/>
	I0729 18:49:15.573898  146540 main.go:141] libmachine: (old-k8s-version-834964)     <pae/>
	I0729 18:49:15.573914  146540 main.go:141] libmachine: (old-k8s-version-834964)     
	I0729 18:49:15.573927  146540 main.go:141] libmachine: (old-k8s-version-834964)   </features>
	I0729 18:49:15.573938  146540 main.go:141] libmachine: (old-k8s-version-834964)   <cpu mode='host-passthrough'>
	I0729 18:49:15.573949  146540 main.go:141] libmachine: (old-k8s-version-834964)   
	I0729 18:49:15.573957  146540 main.go:141] libmachine: (old-k8s-version-834964)   </cpu>
	I0729 18:49:15.573968  146540 main.go:141] libmachine: (old-k8s-version-834964)   <os>
	I0729 18:49:15.573988  146540 main.go:141] libmachine: (old-k8s-version-834964)     <type>hvm</type>
	I0729 18:49:15.574000  146540 main.go:141] libmachine: (old-k8s-version-834964)     <boot dev='cdrom'/>
	I0729 18:49:15.574013  146540 main.go:141] libmachine: (old-k8s-version-834964)     <boot dev='hd'/>
	I0729 18:49:15.574024  146540 main.go:141] libmachine: (old-k8s-version-834964)     <bootmenu enable='no'/>
	I0729 18:49:15.574036  146540 main.go:141] libmachine: (old-k8s-version-834964)   </os>
	I0729 18:49:15.574045  146540 main.go:141] libmachine: (old-k8s-version-834964)   <devices>
	I0729 18:49:15.574057  146540 main.go:141] libmachine: (old-k8s-version-834964)     <disk type='file' device='cdrom'>
	I0729 18:49:15.574073  146540 main.go:141] libmachine: (old-k8s-version-834964)       <source file='/home/jenkins/minikube-integration/19339-88081/.minikube/machines/old-k8s-version-834964/boot2docker.iso'/>
	I0729 18:49:15.574104  146540 main.go:141] libmachine: (old-k8s-version-834964)       <target dev='hdc' bus='scsi'/>
	I0729 18:49:15.574136  146540 main.go:141] libmachine: (old-k8s-version-834964)       <readonly/>
	I0729 18:49:15.574149  146540 main.go:141] libmachine: (old-k8s-version-834964)     </disk>
	I0729 18:49:15.574163  146540 main.go:141] libmachine: (old-k8s-version-834964)     <disk type='file' device='disk'>
	I0729 18:49:15.574187  146540 main.go:141] libmachine: (old-k8s-version-834964)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 18:49:15.574207  146540 main.go:141] libmachine: (old-k8s-version-834964)       <source file='/home/jenkins/minikube-integration/19339-88081/.minikube/machines/old-k8s-version-834964/old-k8s-version-834964.rawdisk'/>
	I0729 18:49:15.574221  146540 main.go:141] libmachine: (old-k8s-version-834964)       <target dev='hda' bus='virtio'/>
	I0729 18:49:15.574232  146540 main.go:141] libmachine: (old-k8s-version-834964)     </disk>
	I0729 18:49:15.574243  146540 main.go:141] libmachine: (old-k8s-version-834964)     <interface type='network'>
	I0729 18:49:15.574255  146540 main.go:141] libmachine: (old-k8s-version-834964)       <source network='mk-old-k8s-version-834964'/>
	I0729 18:49:15.574266  146540 main.go:141] libmachine: (old-k8s-version-834964)       <model type='virtio'/>
	I0729 18:49:15.574277  146540 main.go:141] libmachine: (old-k8s-version-834964)     </interface>
	I0729 18:49:15.574288  146540 main.go:141] libmachine: (old-k8s-version-834964)     <interface type='network'>
	I0729 18:49:15.574300  146540 main.go:141] libmachine: (old-k8s-version-834964)       <source network='default'/>
	I0729 18:49:15.574310  146540 main.go:141] libmachine: (old-k8s-version-834964)       <model type='virtio'/>
	I0729 18:49:15.574321  146540 main.go:141] libmachine: (old-k8s-version-834964)     </interface>
	I0729 18:49:15.574328  146540 main.go:141] libmachine: (old-k8s-version-834964)     <serial type='pty'>
	I0729 18:49:15.574351  146540 main.go:141] libmachine: (old-k8s-version-834964)       <target port='0'/>
	I0729 18:49:15.574374  146540 main.go:141] libmachine: (old-k8s-version-834964)     </serial>
	I0729 18:49:15.574387  146540 main.go:141] libmachine: (old-k8s-version-834964)     <console type='pty'>
	I0729 18:49:15.574399  146540 main.go:141] libmachine: (old-k8s-version-834964)       <target type='serial' port='0'/>
	I0729 18:49:15.574411  146540 main.go:141] libmachine: (old-k8s-version-834964)     </console>
	I0729 18:49:15.574422  146540 main.go:141] libmachine: (old-k8s-version-834964)     <rng model='virtio'>
	I0729 18:49:15.574435  146540 main.go:141] libmachine: (old-k8s-version-834964)       <backend model='random'>/dev/random</backend>
	I0729 18:49:15.574453  146540 main.go:141] libmachine: (old-k8s-version-834964)     </rng>
	I0729 18:49:15.574463  146540 main.go:141] libmachine: (old-k8s-version-834964)     
	I0729 18:49:15.574474  146540 main.go:141] libmachine: (old-k8s-version-834964)     
	I0729 18:49:15.574484  146540 main.go:141] libmachine: (old-k8s-version-834964)   </devices>
	I0729 18:49:15.574493  146540 main.go:141] libmachine: (old-k8s-version-834964) </domain>
	I0729 18:49:15.574507  146540 main.go:141] libmachine: (old-k8s-version-834964) 
	I0729 18:49:15.578320  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:2a:c5:6f in network default
	I0729 18:49:15.578932  146540 main.go:141] libmachine: (old-k8s-version-834964) Ensuring networks are active...
	I0729 18:49:15.578952  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:49:15.579514  146540 main.go:141] libmachine: (old-k8s-version-834964) Ensuring network default is active
	I0729 18:49:15.579829  146540 main.go:141] libmachine: (old-k8s-version-834964) Ensuring network mk-old-k8s-version-834964 is active
	I0729 18:49:15.580311  146540 main.go:141] libmachine: (old-k8s-version-834964) Getting domain xml...
	I0729 18:49:15.580998  146540 main.go:141] libmachine: (old-k8s-version-834964) Creating domain...
	I0729 18:49:15.912746  146540 main.go:141] libmachine: (old-k8s-version-834964) Waiting to get IP...
	I0729 18:49:15.913676  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:49:15.914110  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | unable to find current IP address of domain old-k8s-version-834964 in network mk-old-k8s-version-834964
	I0729 18:49:15.914169  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | I0729 18:49:15.914099  146562 retry.go:31] will retry after 251.036856ms: waiting for machine to come up
	I0729 18:49:16.166662  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:49:16.167238  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | unable to find current IP address of domain old-k8s-version-834964 in network mk-old-k8s-version-834964
	I0729 18:49:16.167268  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | I0729 18:49:16.167185  146562 retry.go:31] will retry after 319.554537ms: waiting for machine to come up
	I0729 18:49:16.489214  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:49:16.489867  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | unable to find current IP address of domain old-k8s-version-834964 in network mk-old-k8s-version-834964
	I0729 18:49:16.489931  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | I0729 18:49:16.489813  146562 retry.go:31] will retry after 380.351583ms: waiting for machine to come up
	I0729 18:49:16.871223  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:49:16.871637  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | unable to find current IP address of domain old-k8s-version-834964 in network mk-old-k8s-version-834964
	I0729 18:49:16.871666  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | I0729 18:49:16.871611  146562 retry.go:31] will retry after 520.878659ms: waiting for machine to come up
	I0729 18:49:17.394531  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:49:17.395179  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | unable to find current IP address of domain old-k8s-version-834964 in network mk-old-k8s-version-834964
	I0729 18:49:17.395210  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | I0729 18:49:17.395117  146562 retry.go:31] will retry after 498.78577ms: waiting for machine to come up
	I0729 18:49:17.896212  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:49:17.896768  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | unable to find current IP address of domain old-k8s-version-834964 in network mk-old-k8s-version-834964
	I0729 18:49:17.896796  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | I0729 18:49:17.896727  146562 retry.go:31] will retry after 947.89031ms: waiting for machine to come up
	I0729 18:49:18.846306  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:49:18.846783  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | unable to find current IP address of domain old-k8s-version-834964 in network mk-old-k8s-version-834964
	I0729 18:49:18.846812  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | I0729 18:49:18.846729  146562 retry.go:31] will retry after 1.094988115s: waiting for machine to come up
	I0729 18:49:19.943099  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:49:19.943638  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | unable to find current IP address of domain old-k8s-version-834964 in network mk-old-k8s-version-834964
	I0729 18:49:19.943661  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | I0729 18:49:19.943578  146562 retry.go:31] will retry after 1.150294001s: waiting for machine to come up
	I0729 18:49:21.095733  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:49:21.096301  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | unable to find current IP address of domain old-k8s-version-834964 in network mk-old-k8s-version-834964
	I0729 18:49:21.096331  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | I0729 18:49:21.096257  146562 retry.go:31] will retry after 1.517108829s: waiting for machine to come up
	I0729 18:49:22.614640  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:49:22.615161  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | unable to find current IP address of domain old-k8s-version-834964 in network mk-old-k8s-version-834964
	I0729 18:49:22.615198  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | I0729 18:49:22.615112  146562 retry.go:31] will retry after 1.674428756s: waiting for machine to come up
	I0729 18:49:24.291929  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:49:24.292503  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | unable to find current IP address of domain old-k8s-version-834964 in network mk-old-k8s-version-834964
	I0729 18:49:24.292532  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | I0729 18:49:24.292442  146562 retry.go:31] will retry after 2.36408842s: waiting for machine to come up
	I0729 18:49:26.658148  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:49:26.658699  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | unable to find current IP address of domain old-k8s-version-834964 in network mk-old-k8s-version-834964
	I0729 18:49:26.658742  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | I0729 18:49:26.658679  146562 retry.go:31] will retry after 2.634353507s: waiting for machine to come up
	I0729 18:49:29.295754  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:49:29.296261  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | unable to find current IP address of domain old-k8s-version-834964 in network mk-old-k8s-version-834964
	I0729 18:49:29.296286  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | I0729 18:49:29.296217  146562 retry.go:31] will retry after 3.280920537s: waiting for machine to come up
	I0729 18:49:32.579089  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:49:32.579644  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | unable to find current IP address of domain old-k8s-version-834964 in network mk-old-k8s-version-834964
	I0729 18:49:32.579677  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | I0729 18:49:32.579580  146562 retry.go:31] will retry after 5.185953962s: waiting for machine to come up
	I0729 18:49:37.767023  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:49:37.767653  146540 main.go:141] libmachine: (old-k8s-version-834964) Found IP for machine: 192.168.61.89
	I0729 18:49:37.767685  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has current primary IP address 192.168.61.89 and MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:49:37.767712  146540 main.go:141] libmachine: (old-k8s-version-834964) Reserving static IP address...
	I0729 18:49:37.768100  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-834964", mac: "52:54:00:60:d4:59", ip: "192.168.61.89"} in network mk-old-k8s-version-834964
	I0729 18:49:37.850682  146540 main.go:141] libmachine: (old-k8s-version-834964) Reserved static IP address: 192.168.61.89
	I0729 18:49:37.850713  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | Getting to WaitForSSH function...
	I0729 18:49:37.850722  146540 main.go:141] libmachine: (old-k8s-version-834964) Waiting for SSH to be available...
	I0729 18:49:37.853882  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:49:37.854282  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:59", ip: ""} in network mk-old-k8s-version-834964: {Iface:virbr1 ExpiryTime:2024-07-29 19:49:29 +0000 UTC Type:0 Mac:52:54:00:60:d4:59 Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:minikube Clientid:01:52:54:00:60:d4:59}
	I0729 18:49:37.854312  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined IP address 192.168.61.89 and MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:49:37.854508  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | Using SSH client type: external
	I0729 18:49:37.854536  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | Using SSH private key: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/old-k8s-version-834964/id_rsa (-rw-------)
	I0729 18:49:37.854564  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.89 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19339-88081/.minikube/machines/old-k8s-version-834964/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 18:49:37.854577  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | About to run SSH command:
	I0729 18:49:37.854591  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | exit 0
	I0729 18:49:37.985140  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | SSH cmd err, output: <nil>: 
	I0729 18:49:37.985488  146540 main.go:141] libmachine: (old-k8s-version-834964) KVM machine creation complete!
	I0729 18:49:37.985822  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetConfigRaw
	I0729 18:49:37.986430  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .DriverName
	I0729 18:49:37.986690  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .DriverName
	I0729 18:49:37.986945  146540 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 18:49:37.986967  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetState
	I0729 18:49:37.988624  146540 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 18:49:37.988638  146540 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 18:49:37.988643  146540 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 18:49:37.988652  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHHostname
	I0729 18:49:37.991567  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:49:37.992095  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:59", ip: ""} in network mk-old-k8s-version-834964: {Iface:virbr1 ExpiryTime:2024-07-29 19:49:29 +0000 UTC Type:0 Mac:52:54:00:60:d4:59 Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:old-k8s-version-834964 Clientid:01:52:54:00:60:d4:59}
	I0729 18:49:37.992119  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined IP address 192.168.61.89 and MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:49:37.992331  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHPort
	I0729 18:49:37.992548  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHKeyPath
	I0729 18:49:37.992721  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHKeyPath
	I0729 18:49:37.992922  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHUsername
	I0729 18:49:37.993137  146540 main.go:141] libmachine: Using SSH client type: native
	I0729 18:49:37.993374  146540 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.89 22 <nil> <nil>}
	I0729 18:49:37.993393  146540 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 18:49:38.104629  146540 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 18:49:38.104659  146540 main.go:141] libmachine: Detecting the provisioner...
	I0729 18:49:38.104672  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHHostname
	I0729 18:49:38.107483  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:49:38.107797  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:59", ip: ""} in network mk-old-k8s-version-834964: {Iface:virbr1 ExpiryTime:2024-07-29 19:49:29 +0000 UTC Type:0 Mac:52:54:00:60:d4:59 Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:old-k8s-version-834964 Clientid:01:52:54:00:60:d4:59}
	I0729 18:49:38.107827  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined IP address 192.168.61.89 and MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:49:38.107985  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHPort
	I0729 18:49:38.108198  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHKeyPath
	I0729 18:49:38.108363  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHKeyPath
	I0729 18:49:38.108579  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHUsername
	I0729 18:49:38.108759  146540 main.go:141] libmachine: Using SSH client type: native
	I0729 18:49:38.109019  146540 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.89 22 <nil> <nil>}
	I0729 18:49:38.109033  146540 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 18:49:38.217901  146540 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 18:49:38.217970  146540 main.go:141] libmachine: found compatible host: buildroot
	I0729 18:49:38.217984  146540 main.go:141] libmachine: Provisioning with buildroot...
	I0729 18:49:38.217996  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetMachineName
	I0729 18:49:38.218231  146540 buildroot.go:166] provisioning hostname "old-k8s-version-834964"
	I0729 18:49:38.218257  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetMachineName
	I0729 18:49:38.218438  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHHostname
	I0729 18:49:38.221130  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:49:38.221474  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:59", ip: ""} in network mk-old-k8s-version-834964: {Iface:virbr1 ExpiryTime:2024-07-29 19:49:29 +0000 UTC Type:0 Mac:52:54:00:60:d4:59 Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:old-k8s-version-834964 Clientid:01:52:54:00:60:d4:59}
	I0729 18:49:38.221507  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined IP address 192.168.61.89 and MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:49:38.221679  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHPort
	I0729 18:49:38.221861  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHKeyPath
	I0729 18:49:38.222036  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHKeyPath
	I0729 18:49:38.222191  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHUsername
	I0729 18:49:38.222354  146540 main.go:141] libmachine: Using SSH client type: native
	I0729 18:49:38.222562  146540 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.89 22 <nil> <nil>}
	I0729 18:49:38.222581  146540 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-834964 && echo "old-k8s-version-834964" | sudo tee /etc/hostname
	I0729 18:49:38.342441  146540 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-834964
	
	I0729 18:49:38.342476  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHHostname
	I0729 18:49:38.345545  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:49:38.345968  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:59", ip: ""} in network mk-old-k8s-version-834964: {Iface:virbr1 ExpiryTime:2024-07-29 19:49:29 +0000 UTC Type:0 Mac:52:54:00:60:d4:59 Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:old-k8s-version-834964 Clientid:01:52:54:00:60:d4:59}
	I0729 18:49:38.346003  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined IP address 192.168.61.89 and MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:49:38.346156  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHPort
	I0729 18:49:38.346377  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHKeyPath
	I0729 18:49:38.346576  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHKeyPath
	I0729 18:49:38.346745  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHUsername
	I0729 18:49:38.346951  146540 main.go:141] libmachine: Using SSH client type: native
	I0729 18:49:38.347138  146540 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.89 22 <nil> <nil>}
	I0729 18:49:38.347160  146540 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-834964' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-834964/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-834964' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 18:49:38.463988  146540 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 18:49:38.464022  146540 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19339-88081/.minikube CaCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19339-88081/.minikube}
	I0729 18:49:38.464045  146540 buildroot.go:174] setting up certificates
	I0729 18:49:38.464057  146540 provision.go:84] configureAuth start
	I0729 18:49:38.464070  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetMachineName
	I0729 18:49:38.464562  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetIP
	I0729 18:49:38.467450  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:49:38.467763  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:59", ip: ""} in network mk-old-k8s-version-834964: {Iface:virbr1 ExpiryTime:2024-07-29 19:49:29 +0000 UTC Type:0 Mac:52:54:00:60:d4:59 Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:old-k8s-version-834964 Clientid:01:52:54:00:60:d4:59}
	I0729 18:49:38.467787  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined IP address 192.168.61.89 and MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:49:38.467991  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHHostname
	I0729 18:49:38.470158  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:49:38.470478  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:59", ip: ""} in network mk-old-k8s-version-834964: {Iface:virbr1 ExpiryTime:2024-07-29 19:49:29 +0000 UTC Type:0 Mac:52:54:00:60:d4:59 Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:old-k8s-version-834964 Clientid:01:52:54:00:60:d4:59}
	I0729 18:49:38.470517  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined IP address 192.168.61.89 and MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:49:38.470612  146540 provision.go:143] copyHostCerts
	I0729 18:49:38.470682  146540 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem, removing ...
	I0729 18:49:38.470696  146540 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem
	I0729 18:49:38.470778  146540 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem (1679 bytes)
	I0729 18:49:38.470902  146540 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem, removing ...
	I0729 18:49:38.470914  146540 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem
	I0729 18:49:38.470964  146540 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem (1078 bytes)
	I0729 18:49:38.471039  146540 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem, removing ...
	I0729 18:49:38.471048  146540 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem
	I0729 18:49:38.471080  146540 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem (1123 bytes)
	I0729 18:49:38.471144  146540 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-834964 san=[127.0.0.1 192.168.61.89 localhost minikube old-k8s-version-834964]
	I0729 18:49:38.635316  146540 provision.go:177] copyRemoteCerts
	I0729 18:49:38.635392  146540 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 18:49:38.635426  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHHostname
	I0729 18:49:38.638182  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:49:38.638474  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:59", ip: ""} in network mk-old-k8s-version-834964: {Iface:virbr1 ExpiryTime:2024-07-29 19:49:29 +0000 UTC Type:0 Mac:52:54:00:60:d4:59 Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:old-k8s-version-834964 Clientid:01:52:54:00:60:d4:59}
	I0729 18:49:38.638525  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined IP address 192.168.61.89 and MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:49:38.638644  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHPort
	I0729 18:49:38.638822  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHKeyPath
	I0729 18:49:38.639035  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHUsername
	I0729 18:49:38.639203  146540 sshutil.go:53] new ssh client: &{IP:192.168.61.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/old-k8s-version-834964/id_rsa Username:docker}
	I0729 18:49:38.726655  146540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 18:49:38.756788  146540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0729 18:49:38.786185  146540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 18:49:38.813769  146540 provision.go:87] duration metric: took 349.697036ms to configureAuth
	I0729 18:49:38.813804  146540 buildroot.go:189] setting minikube options for container-runtime
	I0729 18:49:38.814032  146540 config.go:182] Loaded profile config "old-k8s-version-834964": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 18:49:38.814114  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHHostname
	I0729 18:49:38.817260  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:49:38.817696  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:59", ip: ""} in network mk-old-k8s-version-834964: {Iface:virbr1 ExpiryTime:2024-07-29 19:49:29 +0000 UTC Type:0 Mac:52:54:00:60:d4:59 Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:old-k8s-version-834964 Clientid:01:52:54:00:60:d4:59}
	I0729 18:49:38.817728  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined IP address 192.168.61.89 and MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:49:38.818041  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHPort
	I0729 18:49:38.818235  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHKeyPath
	I0729 18:49:38.818433  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHKeyPath
	I0729 18:49:38.818605  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHUsername
	I0729 18:49:38.818766  146540 main.go:141] libmachine: Using SSH client type: native
	I0729 18:49:38.818943  146540 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.89 22 <nil> <nil>}
	I0729 18:49:38.818971  146540 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 18:49:39.117090  146540 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 18:49:39.117115  146540 main.go:141] libmachine: Checking connection to Docker...
	I0729 18:49:39.117124  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetURL
	I0729 18:49:39.118506  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | Using libvirt version 6000000
	I0729 18:49:39.121171  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:49:39.121609  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:59", ip: ""} in network mk-old-k8s-version-834964: {Iface:virbr1 ExpiryTime:2024-07-29 19:49:29 +0000 UTC Type:0 Mac:52:54:00:60:d4:59 Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:old-k8s-version-834964 Clientid:01:52:54:00:60:d4:59}
	I0729 18:49:39.121639  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined IP address 192.168.61.89 and MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:49:39.121772  146540 main.go:141] libmachine: Docker is up and running!
	I0729 18:49:39.121789  146540 main.go:141] libmachine: Reticulating splines...
	I0729 18:49:39.121797  146540 client.go:171] duration metric: took 24.176547796s to LocalClient.Create
	I0729 18:49:39.121833  146540 start.go:167] duration metric: took 24.176652819s to libmachine.API.Create "old-k8s-version-834964"
	I0729 18:49:39.121844  146540 start.go:293] postStartSetup for "old-k8s-version-834964" (driver="kvm2")
	I0729 18:49:39.121856  146540 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 18:49:39.121912  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .DriverName
	I0729 18:49:39.122179  146540 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 18:49:39.122232  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHHostname
	I0729 18:49:39.124392  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:49:39.124769  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:59", ip: ""} in network mk-old-k8s-version-834964: {Iface:virbr1 ExpiryTime:2024-07-29 19:49:29 +0000 UTC Type:0 Mac:52:54:00:60:d4:59 Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:old-k8s-version-834964 Clientid:01:52:54:00:60:d4:59}
	I0729 18:49:39.124798  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined IP address 192.168.61.89 and MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:49:39.124993  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHPort
	I0729 18:49:39.125153  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHKeyPath
	I0729 18:49:39.125304  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHUsername
	I0729 18:49:39.125490  146540 sshutil.go:53] new ssh client: &{IP:192.168.61.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/old-k8s-version-834964/id_rsa Username:docker}
	I0729 18:49:39.215903  146540 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 18:49:39.220448  146540 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 18:49:39.220479  146540 filesync.go:126] Scanning /home/jenkins/minikube-integration/19339-88081/.minikube/addons for local assets ...
	I0729 18:49:39.220565  146540 filesync.go:126] Scanning /home/jenkins/minikube-integration/19339-88081/.minikube/files for local assets ...
	I0729 18:49:39.220677  146540 filesync.go:149] local asset: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem -> 952822.pem in /etc/ssl/certs
	I0729 18:49:39.220814  146540 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 18:49:39.234460  146540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem --> /etc/ssl/certs/952822.pem (1708 bytes)
	I0729 18:49:39.262028  146540 start.go:296] duration metric: took 140.17115ms for postStartSetup
	I0729 18:49:39.262090  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetConfigRaw
	I0729 18:49:39.262799  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetIP
	I0729 18:49:39.266138  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:49:39.266563  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:59", ip: ""} in network mk-old-k8s-version-834964: {Iface:virbr1 ExpiryTime:2024-07-29 19:49:29 +0000 UTC Type:0 Mac:52:54:00:60:d4:59 Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:old-k8s-version-834964 Clientid:01:52:54:00:60:d4:59}
	I0729 18:49:39.266594  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined IP address 192.168.61.89 and MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:49:39.266853  146540 profile.go:143] Saving config to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/old-k8s-version-834964/config.json ...
	I0729 18:49:39.267048  146540 start.go:128] duration metric: took 24.341591627s to createHost
	I0729 18:49:39.267083  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHHostname
	I0729 18:49:39.269780  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:49:39.270146  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:59", ip: ""} in network mk-old-k8s-version-834964: {Iface:virbr1 ExpiryTime:2024-07-29 19:49:29 +0000 UTC Type:0 Mac:52:54:00:60:d4:59 Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:old-k8s-version-834964 Clientid:01:52:54:00:60:d4:59}
	I0729 18:49:39.270174  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined IP address 192.168.61.89 and MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:49:39.270357  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHPort
	I0729 18:49:39.270548  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHKeyPath
	I0729 18:49:39.270727  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHKeyPath
	I0729 18:49:39.270916  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHUsername
	I0729 18:49:39.271137  146540 main.go:141] libmachine: Using SSH client type: native
	I0729 18:49:39.271362  146540 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.89 22 <nil> <nil>}
	I0729 18:49:39.271377  146540 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 18:49:39.381612  146540 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722278979.355638257
	
	I0729 18:49:39.381639  146540 fix.go:216] guest clock: 1722278979.355638257
	I0729 18:49:39.381649  146540 fix.go:229] Guest: 2024-07-29 18:49:39.355638257 +0000 UTC Remote: 2024-07-29 18:49:39.267062072 +0000 UTC m=+24.462629733 (delta=88.576185ms)
	I0729 18:49:39.381694  146540 fix.go:200] guest clock delta is within tolerance: 88.576185ms
	I0729 18:49:39.381702  146540 start.go:83] releasing machines lock for "old-k8s-version-834964", held for 24.456356518s
	I0729 18:49:39.381730  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .DriverName
	I0729 18:49:39.382029  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetIP
	I0729 18:49:39.385170  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:49:39.385592  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:59", ip: ""} in network mk-old-k8s-version-834964: {Iface:virbr1 ExpiryTime:2024-07-29 19:49:29 +0000 UTC Type:0 Mac:52:54:00:60:d4:59 Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:old-k8s-version-834964 Clientid:01:52:54:00:60:d4:59}
	I0729 18:49:39.385622  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined IP address 192.168.61.89 and MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:49:39.385750  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .DriverName
	I0729 18:49:39.386273  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .DriverName
	I0729 18:49:39.386466  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .DriverName
	I0729 18:49:39.386580  146540 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 18:49:39.386632  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHHostname
	I0729 18:49:39.386714  146540 ssh_runner.go:195] Run: cat /version.json
	I0729 18:49:39.386739  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHHostname
	I0729 18:49:39.389548  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:49:39.389783  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:49:39.389888  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:59", ip: ""} in network mk-old-k8s-version-834964: {Iface:virbr1 ExpiryTime:2024-07-29 19:49:29 +0000 UTC Type:0 Mac:52:54:00:60:d4:59 Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:old-k8s-version-834964 Clientid:01:52:54:00:60:d4:59}
	I0729 18:49:39.389926  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined IP address 192.168.61.89 and MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:49:39.390115  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHPort
	I0729 18:49:39.390291  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:59", ip: ""} in network mk-old-k8s-version-834964: {Iface:virbr1 ExpiryTime:2024-07-29 19:49:29 +0000 UTC Type:0 Mac:52:54:00:60:d4:59 Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:old-k8s-version-834964 Clientid:01:52:54:00:60:d4:59}
	I0729 18:49:39.390317  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined IP address 192.168.61.89 and MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:49:39.390341  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHKeyPath
	I0729 18:49:39.390464  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHPort
	I0729 18:49:39.390538  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHUsername
	I0729 18:49:39.390635  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHKeyPath
	I0729 18:49:39.390705  146540 sshutil.go:53] new ssh client: &{IP:192.168.61.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/old-k8s-version-834964/id_rsa Username:docker}
	I0729 18:49:39.390797  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHUsername
	I0729 18:49:39.390948  146540 sshutil.go:53] new ssh client: &{IP:192.168.61.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/old-k8s-version-834964/id_rsa Username:docker}
	I0729 18:49:39.467665  146540 ssh_runner.go:195] Run: systemctl --version
	I0729 18:49:39.490707  146540 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 18:49:39.663369  146540 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 18:49:39.670351  146540 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 18:49:39.670425  146540 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 18:49:39.687756  146540 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 18:49:39.687793  146540 start.go:495] detecting cgroup driver to use...
	I0729 18:49:39.687875  146540 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 18:49:39.705117  146540 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 18:49:39.719265  146540 docker.go:217] disabling cri-docker service (if available) ...
	I0729 18:49:39.719335  146540 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 18:49:39.733922  146540 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 18:49:39.748009  146540 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 18:49:39.891603  146540 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 18:49:40.076349  146540 docker.go:233] disabling docker service ...
	I0729 18:49:40.076420  146540 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 18:49:40.096027  146540 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 18:49:40.114598  146540 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 18:49:40.267221  146540 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 18:49:40.421513  146540 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 18:49:40.437122  146540 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 18:49:40.457785  146540 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0729 18:49:40.457871  146540 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:49:40.468801  146540 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 18:49:40.468894  146540 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:49:40.480021  146540 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:49:40.493266  146540 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:49:40.504183  146540 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 18:49:40.518050  146540 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 18:49:40.528684  146540 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 18:49:40.528788  146540 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 18:49:40.543158  146540 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 18:49:40.556670  146540 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:49:40.703484  146540 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 18:49:40.862959  146540 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 18:49:40.863056  146540 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 18:49:40.869333  146540 start.go:563] Will wait 60s for crictl version
	I0729 18:49:40.869401  146540 ssh_runner.go:195] Run: which crictl
	I0729 18:49:40.874707  146540 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 18:49:40.932416  146540 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 18:49:40.932524  146540 ssh_runner.go:195] Run: crio --version
	I0729 18:49:40.965687  146540 ssh_runner.go:195] Run: crio --version
	I0729 18:49:40.998262  146540 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0729 18:49:40.999546  146540 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetIP
	I0729 18:49:41.002714  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:49:41.003123  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:59", ip: ""} in network mk-old-k8s-version-834964: {Iface:virbr1 ExpiryTime:2024-07-29 19:49:29 +0000 UTC Type:0 Mac:52:54:00:60:d4:59 Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:old-k8s-version-834964 Clientid:01:52:54:00:60:d4:59}
	I0729 18:49:41.003155  146540 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined IP address 192.168.61.89 and MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:49:41.003358  146540 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0729 18:49:41.008104  146540 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:49:41.021842  146540 kubeadm.go:883] updating cluster {Name:old-k8s-version-834964 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-834964 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.89 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 18:49:41.022016  146540 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 18:49:41.022084  146540 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:49:41.063295  146540 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 18:49:41.063378  146540 ssh_runner.go:195] Run: which lz4
	I0729 18:49:41.069194  146540 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 18:49:41.074787  146540 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 18:49:41.074824  146540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0729 18:49:42.768260  146540 crio.go:462] duration metric: took 1.699119612s to copy over tarball
	I0729 18:49:42.768362  146540 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 18:49:45.531321  146540 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.762918873s)
	I0729 18:49:45.531358  146540 crio.go:469] duration metric: took 2.763060863s to extract the tarball
	I0729 18:49:45.531369  146540 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 18:49:45.576885  146540 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:49:45.626637  146540 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 18:49:45.626664  146540 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 18:49:45.626746  146540 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:49:45.626799  146540 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 18:49:45.626791  146540 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 18:49:45.626888  146540 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0729 18:49:45.626904  146540 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0729 18:49:45.626829  146540 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 18:49:45.626981  146540 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0729 18:49:45.626741  146540 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 18:49:45.628365  146540 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0729 18:49:45.628366  146540 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 18:49:45.628367  146540 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:49:45.628589  146540 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0729 18:49:45.628626  146540 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0729 18:49:45.628367  146540 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 18:49:45.628367  146540 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 18:49:45.628540  146540 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 18:49:45.780358  146540 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0729 18:49:45.786447  146540 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0729 18:49:45.787696  146540 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0729 18:49:45.804402  146540 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0729 18:49:45.822732  146540 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 18:49:45.822925  146540 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0729 18:49:45.827398  146540 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0729 18:49:45.868513  146540 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0729 18:49:45.868574  146540 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0729 18:49:45.868625  146540 ssh_runner.go:195] Run: which crictl
	I0729 18:49:45.888448  146540 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0729 18:49:45.888503  146540 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 18:49:45.888565  146540 ssh_runner.go:195] Run: which crictl
	I0729 18:49:45.911596  146540 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0729 18:49:45.911636  146540 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 18:49:45.911684  146540 ssh_runner.go:195] Run: which crictl
	I0729 18:49:45.939315  146540 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:49:45.963639  146540 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0729 18:49:45.963685  146540 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 18:49:45.963726  146540 ssh_runner.go:195] Run: which crictl
	I0729 18:49:45.988928  146540 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0729 18:49:45.988970  146540 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0729 18:49:45.988999  146540 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0729 18:49:45.989016  146540 ssh_runner.go:195] Run: which crictl
	I0729 18:49:45.989030  146540 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 18:49:45.989076  146540 ssh_runner.go:195] Run: which crictl
	I0729 18:49:45.989092  146540 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0729 18:49:45.989123  146540 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0729 18:49:45.989160  146540 ssh_runner.go:195] Run: which crictl
	I0729 18:49:45.989122  146540 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 18:49:45.989172  146540 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 18:49:45.989163  146540 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 18:49:46.136842  146540 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 18:49:46.136951  146540 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0729 18:49:46.136945  146540 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0729 18:49:46.137001  146540 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 18:49:46.137018  146540 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 18:49:46.137043  146540 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0729 18:49:46.137068  146540 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 18:49:46.188411  146540 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0729 18:49:46.236248  146540 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0729 18:49:46.236308  146540 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0729 18:49:46.236417  146540 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0729 18:49:46.236463  146540 cache_images.go:92] duration metric: took 609.787454ms to LoadCachedImages
	W0729 18:49:46.236553  146540 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19339-88081/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
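The warning above only means that the prebuilt image tarballs were missing from the local cache directory; the required images are pulled later during kubeadm's preflight phase, as the log below shows. To see what the CRI-O runtime on the node actually holds at this point, a quick check from the host could look like this (a sketch, assuming the profile name old-k8s-version-834964 used throughout this log):

	minikube -p old-k8s-version-834964 ssh -- sudo crictl images
	minikube -p old-k8s-version-834964 ssh -- sudo podman images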
	I0729 18:49:46.236569  146540 kubeadm.go:934] updating node { 192.168.61.89 8443 v1.20.0 crio true true} ...
	I0729 18:49:46.236693  146540 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-834964 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.89
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-834964 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
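The kubelet command line above is rendered into a systemd drop-in on the node (the scp of 10-kubeadm.conf appears a few lines below). To confirm what the node actually ends up running, one could inspect the unit directly; a sketch using the profile name from this log:

	minikube -p old-k8s-version-834964 ssh -- sudo systemctl cat kubelet
	minikube -p old-k8s-version-834964 ssh -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf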
	I0729 18:49:46.236778  146540 ssh_runner.go:195] Run: crio config
	I0729 18:49:46.283028  146540 cni.go:84] Creating CNI manager for ""
	I0729 18:49:46.283051  146540 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:49:46.283063  146540 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 18:49:46.283086  146540 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.89 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-834964 NodeName:old-k8s-version-834964 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.89"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.89 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0729 18:49:46.283256  146540 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.89
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-834964"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.89
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.89"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
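The generated kubeadm config is shipped to the node as /var/tmp/minikube/kubeadm.yaml.new and promoted to kubeadm.yaml before init (both steps appear below). With the v1.20.0 binaries already under /var/lib/minikube/binaries, a simple sanity check of such a config is to ask kubeadm which images it implies; a sketch, assuming the paths from this log:

	minikube -p old-k8s-version-834964 ssh -- sudo /var/lib/minikube/binaries/v1.20.0/kubeadm config images list --config /var/tmp/minikube/kubeadm.yaml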
	I0729 18:49:46.283328  146540 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0729 18:49:46.293609  146540 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 18:49:46.293702  146540 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 18:49:46.303864  146540 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0729 18:49:46.320996  146540 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 18:49:46.340046  146540 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0729 18:49:46.357492  146540 ssh_runner.go:195] Run: grep 192.168.61.89	control-plane.minikube.internal$ /etc/hosts
	I0729 18:49:46.361513  146540 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.89	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:49:46.374102  146540 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:49:46.506358  146540 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:49:46.524331  146540 certs.go:68] Setting up /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/old-k8s-version-834964 for IP: 192.168.61.89
	I0729 18:49:46.524355  146540 certs.go:194] generating shared ca certs ...
	I0729 18:49:46.524376  146540 certs.go:226] acquiring lock for ca certs: {Name:mk20106d58b3f22ea0fc0f4b499fa3fa572a2690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:49:46.524554  146540 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key
	I0729 18:49:46.524618  146540 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key
	I0729 18:49:46.524633  146540 certs.go:256] generating profile certs ...
	I0729 18:49:46.524707  146540 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/old-k8s-version-834964/client.key
	I0729 18:49:46.524726  146540 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/old-k8s-version-834964/client.crt with IP's: []
	I0729 18:49:46.723183  146540 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/old-k8s-version-834964/client.crt ...
	I0729 18:49:46.723221  146540 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/old-k8s-version-834964/client.crt: {Name:mk4aaa084c1885fbc532e8de7ff3eee7ba262f30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:49:46.723448  146540 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/old-k8s-version-834964/client.key ...
	I0729 18:49:46.723470  146540 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/old-k8s-version-834964/client.key: {Name:mk5066563a5318424d4212c115ab473463f669cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:49:46.723601  146540 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/old-k8s-version-834964/apiserver.key.34fbf854
	I0729 18:49:46.723626  146540 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/old-k8s-version-834964/apiserver.crt.34fbf854 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.89]
	I0729 18:49:46.842123  146540 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/old-k8s-version-834964/apiserver.crt.34fbf854 ...
	I0729 18:49:46.842153  146540 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/old-k8s-version-834964/apiserver.crt.34fbf854: {Name:mk52d7991a2bdedf17a0a6e948da6c0480372cc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:49:46.842344  146540 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/old-k8s-version-834964/apiserver.key.34fbf854 ...
	I0729 18:49:46.842362  146540 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/old-k8s-version-834964/apiserver.key.34fbf854: {Name:mkd53e7d55f46823576cf78817375d78df0ba97a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:49:46.842479  146540 certs.go:381] copying /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/old-k8s-version-834964/apiserver.crt.34fbf854 -> /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/old-k8s-version-834964/apiserver.crt
	I0729 18:49:46.842619  146540 certs.go:385] copying /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/old-k8s-version-834964/apiserver.key.34fbf854 -> /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/old-k8s-version-834964/apiserver.key
	I0729 18:49:46.842707  146540 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/old-k8s-version-834964/proxy-client.key
	I0729 18:49:46.842726  146540 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/old-k8s-version-834964/proxy-client.crt with IP's: []
	I0729 18:49:46.964604  146540 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/old-k8s-version-834964/proxy-client.crt ...
	I0729 18:49:46.964638  146540 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/old-k8s-version-834964/proxy-client.crt: {Name:mkdc2d98df9b2c76b92310e6730d32abdae49e10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:49:46.995121  146540 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/old-k8s-version-834964/proxy-client.key ...
	I0729 18:49:46.995158  146540 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/old-k8s-version-834964/proxy-client.key: {Name:mk397a9ef2540a94be0d1b342569131bbb7b1633 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:49:46.995470  146540 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282.pem (1338 bytes)
	W0729 18:49:46.995531  146540 certs.go:480] ignoring /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282_empty.pem, impossibly tiny 0 bytes
	I0729 18:49:46.995576  146540 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 18:49:46.995624  146540 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem (1078 bytes)
	I0729 18:49:46.995669  146540 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem (1123 bytes)
	I0729 18:49:46.995706  146540 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem (1679 bytes)
	I0729 18:49:46.995773  146540 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem (1708 bytes)
	I0729 18:49:46.996573  146540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 18:49:47.025441  146540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 18:49:47.050238  146540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 18:49:47.075716  146540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 18:49:47.103014  146540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/old-k8s-version-834964/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 18:49:47.127372  146540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/old-k8s-version-834964/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 18:49:47.151455  146540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/old-k8s-version-834964/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 18:49:47.195932  146540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/old-k8s-version-834964/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 18:49:47.223339  146540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem --> /usr/share/ca-certificates/952822.pem (1708 bytes)
	I0729 18:49:47.250564  146540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 18:49:47.276877  146540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282.pem --> /usr/share/ca-certificates/95282.pem (1338 bytes)
	I0729 18:49:47.302377  146540 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 18:49:47.320618  146540 ssh_runner.go:195] Run: openssl version
	I0729 18:49:47.326601  146540 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/952822.pem && ln -fs /usr/share/ca-certificates/952822.pem /etc/ssl/certs/952822.pem"
	I0729 18:49:47.337920  146540 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/952822.pem
	I0729 18:49:47.342650  146540 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:45 /usr/share/ca-certificates/952822.pem
	I0729 18:49:47.342698  146540 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/952822.pem
	I0729 18:49:47.348406  146540 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/952822.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 18:49:47.361733  146540 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 18:49:47.379655  146540 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:49:47.385762  146540 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:49:47.385844  146540 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:49:47.395125  146540 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 18:49:47.411609  146540 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/95282.pem && ln -fs /usr/share/ca-certificates/95282.pem /etc/ssl/certs/95282.pem"
	I0729 18:49:47.429416  146540 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/95282.pem
	I0729 18:49:47.434905  146540 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:45 /usr/share/ca-certificates/95282.pem
	I0729 18:49:47.434982  146540 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/95282.pem
	I0729 18:49:47.446648  146540 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/95282.pem /etc/ssl/certs/51391683.0"
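The ln -fs calls above create the subject-hash symlinks (3ec20f2e.0, b5213941.0, 51391683.0) that OpenSSL expects in /etc/ssl/certs, so the copied CAs can be found by hash lookup. The mapping can be verified by hand on the node, for example for the minikube CA:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	ls -l /etc/ssl/certs/b5213941.0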
	I0729 18:49:47.462619  146540 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 18:49:47.467810  146540 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 18:49:47.467873  146540 kubeadm.go:392] StartCluster: {Name:old-k8s-version-834964 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-834964 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.89 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:49:47.467969  146540 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 18:49:47.468026  146540 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:49:47.515580  146540 cri.go:89] found id: ""
	I0729 18:49:47.515670  146540 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 18:49:47.528022  146540 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 18:49:47.539905  146540 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:49:47.552152  146540 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:49:47.552179  146540 kubeadm.go:157] found existing configuration files:
	
	I0729 18:49:47.552238  146540 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 18:49:47.562494  146540 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:49:47.562569  146540 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:49:47.572656  146540 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 18:49:47.582868  146540 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:49:47.582934  146540 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:49:47.593018  146540 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 18:49:47.602442  146540 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:49:47.602512  146540 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:49:47.612124  146540 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 18:49:47.622342  146540 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:49:47.622392  146540 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 18:49:47.633838  146540 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 18:49:47.753820  146540 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 18:49:47.753940  146540 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 18:49:47.907051  146540 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 18:49:47.907222  146540 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 18:49:47.907358  146540 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 18:49:48.114447  146540 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 18:49:48.253720  146540 out.go:204]   - Generating certificates and keys ...
	I0729 18:49:48.253844  146540 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 18:49:48.253940  146540 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 18:49:48.401216  146540 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0729 18:49:48.589113  146540 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0729 18:49:48.665863  146540 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0729 18:49:48.819968  146540 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0729 18:49:49.088642  146540 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0729 18:49:49.088865  146540 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-834964] and IPs [192.168.61.89 127.0.0.1 ::1]
	I0729 18:49:49.275573  146540 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0729 18:49:49.275840  146540 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-834964] and IPs [192.168.61.89 127.0.0.1 ::1]
	I0729 18:49:49.342329  146540 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0729 18:49:49.415327  146540 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0729 18:49:49.722222  146540 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0729 18:49:49.722465  146540 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 18:49:49.798239  146540 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 18:49:50.094158  146540 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 18:49:50.640201  146540 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 18:49:50.763144  146540 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 18:49:50.778208  146540 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 18:49:50.779287  146540 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 18:49:50.779588  146540 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 18:49:50.909208  146540 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 18:49:50.910895  146540 out.go:204]   - Booting up control plane ...
	I0729 18:49:50.911043  146540 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 18:49:50.917846  146540 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 18:49:50.918964  146540 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 18:49:50.920370  146540 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 18:49:50.924536  146540 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 18:50:30.918616  146540 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 18:50:30.920037  146540 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:50:30.920248  146540 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:50:35.920548  146540 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:50:35.920727  146540 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:50:45.919993  146540 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:50:45.920247  146540 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:51:05.919910  146540 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:51:05.920384  146540 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:51:45.920802  146540 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:51:45.921043  146540 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:51:45.921075  146540 kubeadm.go:310] 
	I0729 18:51:45.921150  146540 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 18:51:45.921219  146540 kubeadm.go:310] 		timed out waiting for the condition
	I0729 18:51:45.921229  146540 kubeadm.go:310] 
	I0729 18:51:45.921291  146540 kubeadm.go:310] 	This error is likely caused by:
	I0729 18:51:45.921355  146540 kubeadm.go:310] 		- The kubelet is not running
	I0729 18:51:45.921505  146540 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 18:51:45.921525  146540 kubeadm.go:310] 
	I0729 18:51:45.921613  146540 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 18:51:45.921647  146540 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 18:51:45.921708  146540 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 18:51:45.921722  146540 kubeadm.go:310] 
	I0729 18:51:45.921916  146540 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 18:51:45.922048  146540 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 18:51:45.922059  146540 kubeadm.go:310] 
	I0729 18:51:45.922196  146540 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 18:51:45.922321  146540 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 18:51:45.922441  146540 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 18:51:45.922555  146540 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 18:51:45.922574  146540 kubeadm.go:310] 
	I0729 18:51:45.923208  146540 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 18:51:45.923334  146540 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 18:51:45.923456  146540 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0729 18:51:45.923563  146540 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-834964] and IPs [192.168.61.89 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-834964] and IPs [192.168.61.89 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
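Before the retry below, the troubleshooting steps kubeadm suggests can be driven from the host; a minimal sketch, again assuming the profile name from this log:

	minikube -p old-k8s-version-834964 ssh -- sudo systemctl status kubelet --no-pager
	minikube -p old-k8s-version-834964 ssh -- sudo journalctl -xeu kubelet --no-pager
	minikube -p old-k8s-version-834964 ssh -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a

In this run the kubelet never answered on 127.0.0.1:10248, so minikube resets the node with kubeadm reset and retries the init, as the next lines show.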
	I0729 18:51:45.923616  146540 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 18:51:46.441791  146540 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:51:46.455614  146540 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:51:46.465207  146540 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:51:46.465226  146540 kubeadm.go:157] found existing configuration files:
	
	I0729 18:51:46.465277  146540 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 18:51:46.474646  146540 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:51:46.474701  146540 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:51:46.484003  146540 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 18:51:46.492752  146540 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:51:46.492800  146540 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:51:46.502043  146540 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 18:51:46.510723  146540 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:51:46.510765  146540 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:51:46.519712  146540 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 18:51:46.528411  146540 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:51:46.528460  146540 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 18:51:46.537397  146540 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 18:51:46.604447  146540 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 18:51:46.604516  146540 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 18:51:46.735054  146540 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 18:51:46.735179  146540 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 18:51:46.735304  146540 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 18:51:46.926175  146540 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 18:51:46.928137  146540 out.go:204]   - Generating certificates and keys ...
	I0729 18:51:46.928245  146540 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 18:51:46.928334  146540 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 18:51:46.928440  146540 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 18:51:46.928550  146540 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 18:51:46.928646  146540 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 18:51:46.928718  146540 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 18:51:46.928818  146540 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 18:51:46.928913  146540 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 18:51:46.929031  146540 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 18:51:46.929143  146540 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 18:51:46.929199  146540 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 18:51:46.929289  146540 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 18:51:47.116714  146540 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 18:51:47.197997  146540 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 18:51:47.466143  146540 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 18:51:47.621876  146540 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 18:51:47.636962  146540 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 18:51:47.639306  146540 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 18:51:47.639383  146540 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 18:51:47.769424  146540 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 18:51:47.771149  146540 out.go:204]   - Booting up control plane ...
	I0729 18:51:47.771252  146540 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 18:51:47.784252  146540 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 18:51:47.785484  146540 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 18:51:47.786468  146540 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 18:51:47.790222  146540 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 18:52:27.793015  146540 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 18:52:27.793563  146540 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:52:27.793840  146540 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:52:32.794710  146540 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:52:32.794942  146540 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:52:42.795606  146540 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:52:42.795863  146540 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:53:02.794825  146540 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:53:02.795033  146540 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:53:42.794283  146540 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 18:53:42.794565  146540 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 18:53:42.794584  146540 kubeadm.go:310] 
	I0729 18:53:42.794638  146540 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 18:53:42.794689  146540 kubeadm.go:310] 		timed out waiting for the condition
	I0729 18:53:42.794699  146540 kubeadm.go:310] 
	I0729 18:53:42.794743  146540 kubeadm.go:310] 	This error is likely caused by:
	I0729 18:53:42.794813  146540 kubeadm.go:310] 		- The kubelet is not running
	I0729 18:53:42.794926  146540 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 18:53:42.794935  146540 kubeadm.go:310] 
	I0729 18:53:42.795018  146540 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 18:53:42.795074  146540 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 18:53:42.795121  146540 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 18:53:42.795129  146540 kubeadm.go:310] 
	I0729 18:53:42.795267  146540 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 18:53:42.795375  146540 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 18:53:42.795384  146540 kubeadm.go:310] 
	I0729 18:53:42.795516  146540 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 18:53:42.795635  146540 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 18:53:42.795746  146540 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 18:53:42.795865  146540 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 18:53:42.795881  146540 kubeadm.go:310] 
	I0729 18:53:42.796649  146540 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 18:53:42.796747  146540 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 18:53:42.796940  146540 kubeadm.go:394] duration metric: took 3m55.329071027s to StartCluster
	I0729 18:53:42.796957  146540 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 18:53:42.797015  146540 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:53:42.797080  146540 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:53:42.841289  146540 cri.go:89] found id: ""
	I0729 18:53:42.841312  146540 logs.go:276] 0 containers: []
	W0729 18:53:42.841322  146540 logs.go:278] No container was found matching "kube-apiserver"
	I0729 18:53:42.841330  146540 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:53:42.841382  146540 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:53:42.910641  146540 cri.go:89] found id: ""
	I0729 18:53:42.910666  146540 logs.go:276] 0 containers: []
	W0729 18:53:42.910676  146540 logs.go:278] No container was found matching "etcd"
	I0729 18:53:42.910683  146540 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:53:42.910739  146540 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:53:42.946956  146540 cri.go:89] found id: ""
	I0729 18:53:42.946980  146540 logs.go:276] 0 containers: []
	W0729 18:53:42.946991  146540 logs.go:278] No container was found matching "coredns"
	I0729 18:53:42.946998  146540 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:53:42.947056  146540 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:53:42.982033  146540 cri.go:89] found id: ""
	I0729 18:53:42.982060  146540 logs.go:276] 0 containers: []
	W0729 18:53:42.982070  146540 logs.go:278] No container was found matching "kube-scheduler"
	I0729 18:53:42.982078  146540 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:53:42.982133  146540 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:53:43.030176  146540 cri.go:89] found id: ""
	I0729 18:53:43.030209  146540 logs.go:276] 0 containers: []
	W0729 18:53:43.030220  146540 logs.go:278] No container was found matching "kube-proxy"
	I0729 18:53:43.030227  146540 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:53:43.030288  146540 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:53:43.069413  146540 cri.go:89] found id: ""
	I0729 18:53:43.069443  146540 logs.go:276] 0 containers: []
	W0729 18:53:43.069451  146540 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 18:53:43.069456  146540 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:53:43.069522  146540 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:53:43.103098  146540 cri.go:89] found id: ""
	I0729 18:53:43.103134  146540 logs.go:276] 0 containers: []
	W0729 18:53:43.103146  146540 logs.go:278] No container was found matching "kindnet"
	I0729 18:53:43.103160  146540 logs.go:123] Gathering logs for kubelet ...
	I0729 18:53:43.103175  146540 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 18:53:43.151056  146540 logs.go:123] Gathering logs for dmesg ...
	I0729 18:53:43.151090  146540 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:53:43.164688  146540 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:53:43.164721  146540 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 18:53:43.270797  146540 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 18:53:43.270823  146540 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:53:43.270839  146540 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:53:43.368606  146540 logs.go:123] Gathering logs for container status ...
	I0729 18:53:43.368647  146540 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0729 18:53:43.407083  146540 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0729 18:53:43.407133  146540 out.go:239] * 
	W0729 18:53:43.407210  146540 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 18:53:43.407244  146540 out.go:239] * 
	W0729 18:53:43.408194  146540 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 18:53:43.410941  146540 out.go:177] 
	W0729 18:53:43.412025  146540 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 18:53:43.412083  146540 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0729 18:53:43.412109  146540 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0729 18:53:43.413905  146540 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-834964 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-834964 -n old-k8s-version-834964
E0729 18:53:43.709385   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/flannel-085245/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-834964 -n old-k8s-version-834964: exit status 6 (245.179432ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 18:53:43.700017  151129 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-834964" does not appear in /home/jenkins/minikube-integration/19339-88081/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-834964" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (268.92s)
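The old-k8s-version profile never brings up a healthy kubelet, so kubeadm times out waiting for the control plane and minikube exits with K8S_KUBELET_NOT_RUNNING. If this needs to be reproduced outside CI, a minimal triage sketch following kubeadm's own advice might look like the commands below; the profile name, CRI-O socket path, and the cgroup-driver retry flag are taken from the log above, and whether pinning the cgroup driver actually clears the hang is an assumption, not a confirmed fix.

	# Inspect kubelet and CRI-O state inside the VM (hedged sketch, not a verified fix)
	minikube ssh -p old-k8s-version-834964 -- 'sudo systemctl status kubelet'
	minikube ssh -p old-k8s-version-834964 -- 'sudo journalctl -xeu kubelet | tail -n 100'
	minikube ssh -p old-k8s-version-834964 -- "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# Retry the start with the kubelet cgroup driver pinned to systemd, as the log itself suggests
	minikube start -p old-k8s-version-834964 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd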

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (138.94s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-524369 --alsologtostderr -v=3
E0729 18:51:44.227544   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/kindnet-085245/client.crt: no such file or directory
E0729 18:51:49.348531   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/kindnet-085245/client.crt: no such file or directory
E0729 18:51:59.589242   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/kindnet-085245/client.crt: no such file or directory
E0729 18:52:09.580596   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/auto-085245/client.crt: no such file or directory
E0729 18:52:16.383267   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-524369 --alsologtostderr -v=3: exit status 82 (2m0.492810195s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-524369"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 18:51:42.389693  150504 out.go:291] Setting OutFile to fd 1 ...
	I0729 18:51:42.389819  150504 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:51:42.389830  150504 out.go:304] Setting ErrFile to fd 2...
	I0729 18:51:42.389836  150504 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:51:42.390339  150504 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19339-88081/.minikube/bin
	I0729 18:51:42.390743  150504 out.go:298] Setting JSON to false
	I0729 18:51:42.390872  150504 mustload.go:65] Loading cluster: no-preload-524369
	I0729 18:51:42.391726  150504 config.go:182] Loaded profile config "no-preload-524369": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 18:51:42.391949  150504 profile.go:143] Saving config to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/no-preload-524369/config.json ...
	I0729 18:51:42.392166  150504 mustload.go:65] Loading cluster: no-preload-524369
	I0729 18:51:42.392280  150504 config.go:182] Loaded profile config "no-preload-524369": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 18:51:42.392314  150504 stop.go:39] StopHost: no-preload-524369
	I0729 18:51:42.392675  150504 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:51:42.392712  150504 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:51:42.407225  150504 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40519
	I0729 18:51:42.407642  150504 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:51:42.408178  150504 main.go:141] libmachine: Using API Version  1
	I0729 18:51:42.408202  150504 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:51:42.408547  150504 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:51:42.410859  150504 out.go:177] * Stopping node "no-preload-524369"  ...
	I0729 18:51:42.412048  150504 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0729 18:51:42.412083  150504 main.go:141] libmachine: (no-preload-524369) Calling .DriverName
	I0729 18:51:42.412323  150504 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0729 18:51:42.412350  150504 main.go:141] libmachine: (no-preload-524369) Calling .GetSSHHostname
	I0729 18:51:42.415480  150504 main.go:141] libmachine: (no-preload-524369) DBG | domain no-preload-524369 has defined MAC address 52:54:00:16:73:ec in network mk-no-preload-524369
	I0729 18:51:42.415837  150504 main.go:141] libmachine: (no-preload-524369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:73:ec", ip: ""} in network mk-no-preload-524369: {Iface:virbr2 ExpiryTime:2024-07-29 19:50:34 +0000 UTC Type:0 Mac:52:54:00:16:73:ec Iaid: IPaddr:192.168.72.7 Prefix:24 Hostname:no-preload-524369 Clientid:01:52:54:00:16:73:ec}
	I0729 18:51:42.415869  150504 main.go:141] libmachine: (no-preload-524369) DBG | domain no-preload-524369 has defined IP address 192.168.72.7 and MAC address 52:54:00:16:73:ec in network mk-no-preload-524369
	I0729 18:51:42.416015  150504 main.go:141] libmachine: (no-preload-524369) Calling .GetSSHPort
	I0729 18:51:42.416198  150504 main.go:141] libmachine: (no-preload-524369) Calling .GetSSHKeyPath
	I0729 18:51:42.416350  150504 main.go:141] libmachine: (no-preload-524369) Calling .GetSSHUsername
	I0729 18:51:42.416506  150504 sshutil.go:53] new ssh client: &{IP:192.168.72.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/no-preload-524369/id_rsa Username:docker}
	I0729 18:51:42.513048  150504 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0729 18:51:42.584642  150504 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0729 18:51:42.635707  150504 main.go:141] libmachine: Stopping "no-preload-524369"...
	I0729 18:51:42.635738  150504 main.go:141] libmachine: (no-preload-524369) Calling .GetState
	I0729 18:51:42.637357  150504 main.go:141] libmachine: (no-preload-524369) Calling .Stop
	I0729 18:51:42.641230  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 0/120
	I0729 18:51:43.642549  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 1/120
	I0729 18:51:44.643740  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 2/120
	I0729 18:51:45.645127  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 3/120
	I0729 18:51:46.647427  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 4/120
	I0729 18:51:47.649841  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 5/120
	I0729 18:51:48.651275  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 6/120
	I0729 18:51:49.652480  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 7/120
	I0729 18:51:50.653836  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 8/120
	I0729 18:51:51.655387  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 9/120
	I0729 18:51:52.657547  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 10/120
	I0729 18:51:53.659569  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 11/120
	I0729 18:51:54.661094  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 12/120
	I0729 18:51:55.663393  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 13/120
	I0729 18:51:56.665104  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 14/120
	I0729 18:51:57.666804  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 15/120
	I0729 18:51:58.668137  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 16/120
	I0729 18:51:59.669634  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 17/120
	I0729 18:52:00.671259  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 18/120
	I0729 18:52:01.672611  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 19/120
	I0729 18:52:02.674541  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 20/120
	I0729 18:52:03.675774  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 21/120
	I0729 18:52:04.677391  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 22/120
	I0729 18:52:05.679607  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 23/120
	I0729 18:52:06.681236  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 24/120
	I0729 18:52:07.682738  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 25/120
	I0729 18:52:08.684153  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 26/120
	I0729 18:52:09.685554  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 27/120
	I0729 18:52:10.686921  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 28/120
	I0729 18:52:11.688153  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 29/120
	I0729 18:52:12.690754  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 30/120
	I0729 18:52:13.692094  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 31/120
	I0729 18:52:14.693950  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 32/120
	I0729 18:52:15.695269  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 33/120
	I0729 18:52:16.697251  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 34/120
	I0729 18:52:17.699410  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 35/120
	I0729 18:52:18.700601  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 36/120
	I0729 18:52:19.702077  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 37/120
	I0729 18:52:20.703550  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 38/120
	I0729 18:52:21.704922  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 39/120
	I0729 18:52:22.707099  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 40/120
	I0729 18:52:23.708324  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 41/120
	I0729 18:52:24.709905  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 42/120
	I0729 18:52:25.711461  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 43/120
	I0729 18:52:26.712812  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 44/120
	I0729 18:52:27.714148  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 45/120
	I0729 18:52:28.715786  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 46/120
	I0729 18:52:29.717566  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 47/120
	I0729 18:52:30.719081  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 48/120
	I0729 18:52:31.720401  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 49/120
	I0729 18:52:32.722209  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 50/120
	I0729 18:52:33.723739  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 51/120
	I0729 18:52:34.725200  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 52/120
	I0729 18:52:35.727577  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 53/120
	I0729 18:52:36.728744  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 54/120
	I0729 18:52:37.730062  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 55/120
	I0729 18:52:38.731473  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 56/120
	I0729 18:52:39.732853  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 57/120
	I0729 18:52:40.734396  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 58/120
	I0729 18:52:41.735827  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 59/120
	I0729 18:52:42.738232  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 60/120
	I0729 18:52:43.739805  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 61/120
	I0729 18:52:44.741445  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 62/120
	I0729 18:52:45.742852  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 63/120
	I0729 18:52:46.744242  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 64/120
	I0729 18:52:47.746177  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 65/120
	I0729 18:52:48.747621  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 66/120
	I0729 18:52:49.749169  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 67/120
	I0729 18:52:50.750617  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 68/120
	I0729 18:52:51.751911  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 69/120
	I0729 18:52:52.754284  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 70/120
	I0729 18:52:53.755593  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 71/120
	I0729 18:52:54.757209  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 72/120
	I0729 18:52:55.758455  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 73/120
	I0729 18:52:56.759653  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 74/120
	I0729 18:52:57.761804  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 75/120
	I0729 18:52:58.763368  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 76/120
	I0729 18:52:59.764945  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 77/120
	I0729 18:53:00.766347  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 78/120
	I0729 18:53:01.767790  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 79/120
	I0729 18:53:02.770242  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 80/120
	I0729 18:53:03.771936  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 81/120
	I0729 18:53:04.773305  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 82/120
	I0729 18:53:05.774589  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 83/120
	I0729 18:53:06.775981  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 84/120
	I0729 18:53:07.778059  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 85/120
	I0729 18:53:08.779521  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 86/120
	I0729 18:53:09.781029  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 87/120
	I0729 18:53:10.782402  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 88/120
	I0729 18:53:11.783785  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 89/120
	I0729 18:53:12.785839  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 90/120
	I0729 18:53:13.787396  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 91/120
	I0729 18:53:14.788882  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 92/120
	I0729 18:53:15.790192  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 93/120
	I0729 18:53:16.791645  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 94/120
	I0729 18:53:17.793699  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 95/120
	I0729 18:53:18.795123  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 96/120
	I0729 18:53:19.796494  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 97/120
	I0729 18:53:20.798460  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 98/120
	I0729 18:53:21.800147  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 99/120
	I0729 18:53:22.802542  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 100/120
	I0729 18:53:23.803733  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 101/120
	I0729 18:53:24.805228  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 102/120
	I0729 18:53:25.806655  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 103/120
	I0729 18:53:26.808027  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 104/120
	I0729 18:53:27.810171  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 105/120
	I0729 18:53:28.811492  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 106/120
	I0729 18:53:29.812832  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 107/120
	I0729 18:53:30.814385  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 108/120
	I0729 18:53:31.815694  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 109/120
	I0729 18:53:32.818008  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 110/120
	I0729 18:53:33.819229  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 111/120
	I0729 18:53:34.820630  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 112/120
	I0729 18:53:35.822038  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 113/120
	I0729 18:53:36.823364  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 114/120
	I0729 18:53:37.825119  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 115/120
	I0729 18:53:38.826524  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 116/120
	I0729 18:53:39.827877  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 117/120
	I0729 18:53:40.829378  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 118/120
	I0729 18:53:41.830617  150504 main.go:141] libmachine: (no-preload-524369) Waiting for machine to stop 119/120
	I0729 18:53:42.831889  150504 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0729 18:53:42.831978  150504 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0729 18:53:42.834041  150504 out.go:177] 
	W0729 18:53:42.835290  150504 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0729 18:53:42.835313  150504 out.go:239] * 
	W0729 18:53:42.839743  150504 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 18:53:42.840938  150504 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-524369 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-524369 -n no-preload-524369
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-524369 -n no-preload-524369: exit status 3 (18.447128481s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 18:54:01.289250  151097 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.7:22: connect: no route to host
	E0729 18:54:01.289269  151097 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.7:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-524369" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (138.94s)
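The failure above is a timeout pattern: libmachine polls the VM state once per second ("Waiting for machine to stop N/120") and, after 120 attempts, gives up with GUEST_STOP_TIMEOUT and exit status 82. The Go sketch below only illustrates a poll-until-stopped loop of that shape; it is not minikube's actual implementation, and vmIsRunning is a hypothetical stand-in for the driver's real state query.

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// vmIsRunning is a hypothetical stand-in for the driver's state check
	// (for example a libvirt domain state query); it is not part of minikube.
	func vmIsRunning() bool { return true }

	// waitForStop polls once per second, mirroring the
	// "Waiting for machine to stop N/120" lines captured above.
	func waitForStop(attempts int) error {
		for i := 0; i < attempts; i++ {
			if !vmIsRunning() {
				return nil
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
			time.Sleep(time.Second)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		if err := waitForStop(120); err != nil {
			// the real CLI surfaces this as GUEST_STOP_TIMEOUT / exit status 82
			fmt.Println("stop err:", err)
		}
	}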

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.02s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-612270 --alsologtostderr -v=3
E0729 18:52:40.400320   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/custom-flannel-085245/client.crt: no such file or directory
E0729 18:52:40.405601   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/custom-flannel-085245/client.crt: no such file or directory
E0729 18:52:40.415850   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/custom-flannel-085245/client.crt: no such file or directory
E0729 18:52:40.436174   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/custom-flannel-085245/client.crt: no such file or directory
E0729 18:52:40.476935   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/custom-flannel-085245/client.crt: no such file or directory
E0729 18:52:40.557307   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/custom-flannel-085245/client.crt: no such file or directory
E0729 18:52:40.717828   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/custom-flannel-085245/client.crt: no such file or directory
E0729 18:52:41.039034   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/custom-flannel-085245/client.crt: no such file or directory
E0729 18:52:41.679694   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/custom-flannel-085245/client.crt: no such file or directory
E0729 18:52:42.960390   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/custom-flannel-085245/client.crt: no such file or directory
E0729 18:52:45.521295   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/custom-flannel-085245/client.crt: no such file or directory
E0729 18:52:46.273799   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/enable-default-cni-085245/client.crt: no such file or directory
E0729 18:52:46.279063   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/enable-default-cni-085245/client.crt: no such file or directory
E0729 18:52:46.289306   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/enable-default-cni-085245/client.crt: no such file or directory
E0729 18:52:46.309575   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/enable-default-cni-085245/client.crt: no such file or directory
E0729 18:52:46.349888   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/enable-default-cni-085245/client.crt: no such file or directory
E0729 18:52:46.430205   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/enable-default-cni-085245/client.crt: no such file or directory
E0729 18:52:46.590652   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/enable-default-cni-085245/client.crt: no such file or directory
E0729 18:52:46.911699   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/enable-default-cni-085245/client.crt: no such file or directory
E0729 18:52:47.552823   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/enable-default-cni-085245/client.crt: no such file or directory
E0729 18:52:48.833008   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/enable-default-cni-085245/client.crt: no such file or directory
E0729 18:52:50.641731   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/custom-flannel-085245/client.crt: no such file or directory
E0729 18:52:51.393458   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/enable-default-cni-085245/client.crt: no such file or directory
E0729 18:52:56.514133   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/enable-default-cni-085245/client.crt: no such file or directory
E0729 18:53:00.882791   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/custom-flannel-085245/client.crt: no such file or directory
E0729 18:53:01.030112   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/kindnet-085245/client.crt: no such file or directory
E0729 18:53:06.754999   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/enable-default-cni-085245/client.crt: no such file or directory
E0729 18:53:18.903587   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/functional-810151/client.crt: no such file or directory
E0729 18:53:21.363285   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/custom-flannel-085245/client.crt: no such file or directory
E0729 18:53:27.235806   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/enable-default-cni-085245/client.crt: no such file or directory
E0729 18:53:31.500997   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/auto-085245/client.crt: no such file or directory
E0729 18:53:38.590477   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/flannel-085245/client.crt: no such file or directory
E0729 18:53:38.595737   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/flannel-085245/client.crt: no such file or directory
E0729 18:53:38.605971   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/flannel-085245/client.crt: no such file or directory
E0729 18:53:38.626226   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/flannel-085245/client.crt: no such file or directory
E0729 18:53:38.666467   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/flannel-085245/client.crt: no such file or directory
E0729 18:53:38.746826   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/flannel-085245/client.crt: no such file or directory
E0729 18:53:38.907068   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/flannel-085245/client.crt: no such file or directory
E0729 18:53:39.227818   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/flannel-085245/client.crt: no such file or directory
E0729 18:53:39.868228   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/flannel-085245/client.crt: no such file or directory
E0729 18:53:41.148981   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/flannel-085245/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-612270 --alsologtostderr -v=3: exit status 82 (2m0.504276855s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-612270"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 18:52:30.183977  150777 out.go:291] Setting OutFile to fd 1 ...
	I0729 18:52:30.184088  150777 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:52:30.184099  150777 out.go:304] Setting ErrFile to fd 2...
	I0729 18:52:30.184104  150777 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:52:30.184293  150777 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19339-88081/.minikube/bin
	I0729 18:52:30.184518  150777 out.go:298] Setting JSON to false
	I0729 18:52:30.184592  150777 mustload.go:65] Loading cluster: default-k8s-diff-port-612270
	I0729 18:52:30.184957  150777 config.go:182] Loaded profile config "default-k8s-diff-port-612270": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:52:30.185033  150777 profile.go:143] Saving config to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/default-k8s-diff-port-612270/config.json ...
	I0729 18:52:30.185195  150777 mustload.go:65] Loading cluster: default-k8s-diff-port-612270
	I0729 18:52:30.185292  150777 config.go:182] Loaded profile config "default-k8s-diff-port-612270": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:52:30.185324  150777 stop.go:39] StopHost: default-k8s-diff-port-612270
	I0729 18:52:30.185722  150777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:52:30.185766  150777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:52:30.200265  150777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42157
	I0729 18:52:30.200799  150777 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:52:30.201398  150777 main.go:141] libmachine: Using API Version  1
	I0729 18:52:30.201418  150777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:52:30.201764  150777 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:52:30.204131  150777 out.go:177] * Stopping node "default-k8s-diff-port-612270"  ...
	I0729 18:52:30.205467  150777 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0729 18:52:30.205508  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Calling .DriverName
	I0729 18:52:30.205745  150777 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0729 18:52:30.205771  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Calling .GetSSHHostname
	I0729 18:52:30.208636  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | domain default-k8s-diff-port-612270 has defined MAC address 52:54:00:e1:29:74 in network mk-default-k8s-diff-port-612270
	I0729 18:52:30.209069  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:29:74", ip: ""} in network mk-default-k8s-diff-port-612270: {Iface:virbr3 ExpiryTime:2024-07-29 19:50:58 +0000 UTC Type:0 Mac:52:54:00:e1:29:74 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:default-k8s-diff-port-612270 Clientid:01:52:54:00:e1:29:74}
	I0729 18:52:30.209097  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) DBG | domain default-k8s-diff-port-612270 has defined IP address 192.168.39.152 and MAC address 52:54:00:e1:29:74 in network mk-default-k8s-diff-port-612270
	I0729 18:52:30.209295  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Calling .GetSSHPort
	I0729 18:52:30.209470  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Calling .GetSSHKeyPath
	I0729 18:52:30.209646  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Calling .GetSSHUsername
	I0729 18:52:30.209793  150777 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/default-k8s-diff-port-612270/id_rsa Username:docker}
	I0729 18:52:30.315841  150777 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0729 18:52:30.377684  150777 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0729 18:52:30.441350  150777 main.go:141] libmachine: Stopping "default-k8s-diff-port-612270"...
	I0729 18:52:30.441398  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Calling .GetState
	I0729 18:52:30.443183  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Calling .Stop
	I0729 18:52:30.446454  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 0/120
	I0729 18:52:31.447972  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 1/120
	I0729 18:52:32.449500  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 2/120
	I0729 18:52:33.450775  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 3/120
	I0729 18:52:34.452083  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 4/120
	I0729 18:52:35.454369  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 5/120
	I0729 18:52:36.455714  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 6/120
	I0729 18:52:37.457102  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 7/120
	I0729 18:52:38.458647  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 8/120
	I0729 18:52:39.460149  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 9/120
	I0729 18:52:40.461688  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 10/120
	I0729 18:52:41.463175  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 11/120
	I0729 18:52:42.464456  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 12/120
	I0729 18:52:43.465840  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 13/120
	I0729 18:52:44.467224  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 14/120
	I0729 18:52:45.468959  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 15/120
	I0729 18:52:46.470300  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 16/120
	I0729 18:52:47.471471  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 17/120
	I0729 18:52:48.472647  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 18/120
	I0729 18:52:49.474018  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 19/120
	I0729 18:52:50.476273  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 20/120
	I0729 18:52:51.478480  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 21/120
	I0729 18:52:52.479840  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 22/120
	I0729 18:52:53.481122  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 23/120
	I0729 18:52:54.483135  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 24/120
	I0729 18:52:55.484980  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 25/120
	I0729 18:52:56.486800  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 26/120
	I0729 18:52:57.488012  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 27/120
	I0729 18:52:58.489225  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 28/120
	I0729 18:52:59.491499  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 29/120
	I0729 18:53:00.493904  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 30/120
	I0729 18:53:01.495379  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 31/120
	I0729 18:53:02.496965  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 32/120
	I0729 18:53:03.498411  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 33/120
	I0729 18:53:04.499757  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 34/120
	I0729 18:53:05.501938  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 35/120
	I0729 18:53:06.503446  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 36/120
	I0729 18:53:07.504916  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 37/120
	I0729 18:53:08.506245  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 38/120
	I0729 18:53:09.507771  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 39/120
	I0729 18:53:10.510071  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 40/120
	I0729 18:53:11.511627  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 41/120
	I0729 18:53:12.513082  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 42/120
	I0729 18:53:13.515588  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 43/120
	I0729 18:53:14.516970  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 44/120
	I0729 18:53:15.519169  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 45/120
	I0729 18:53:16.520629  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 46/120
	I0729 18:53:17.522241  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 47/120
	I0729 18:53:18.523411  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 48/120
	I0729 18:53:19.524781  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 49/120
	I0729 18:53:20.527187  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 50/120
	I0729 18:53:21.528573  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 51/120
	I0729 18:53:22.530375  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 52/120
	I0729 18:53:23.531682  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 53/120
	I0729 18:53:24.533202  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 54/120
	I0729 18:53:25.535288  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 55/120
	I0729 18:53:26.536770  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 56/120
	I0729 18:53:27.538288  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 57/120
	I0729 18:53:28.539896  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 58/120
	I0729 18:53:29.541437  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 59/120
	I0729 18:53:30.543405  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 60/120
	I0729 18:53:31.544954  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 61/120
	I0729 18:53:32.546383  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 62/120
	I0729 18:53:33.547772  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 63/120
	I0729 18:53:34.549275  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 64/120
	I0729 18:53:35.551623  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 65/120
	I0729 18:53:36.552936  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 66/120
	I0729 18:53:37.554660  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 67/120
	I0729 18:53:38.556047  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 68/120
	I0729 18:53:39.557777  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 69/120
	I0729 18:53:40.560055  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 70/120
	I0729 18:53:41.561465  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 71/120
	I0729 18:53:42.562848  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 72/120
	I0729 18:53:43.564093  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 73/120
	I0729 18:53:44.565458  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 74/120
	I0729 18:53:45.567681  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 75/120
	I0729 18:53:46.569240  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 76/120
	I0729 18:53:47.570572  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 77/120
	I0729 18:53:48.571957  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 78/120
	I0729 18:53:49.573206  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 79/120
	I0729 18:53:50.575325  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 80/120
	I0729 18:53:51.576545  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 81/120
	I0729 18:53:52.578013  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 82/120
	I0729 18:53:53.579297  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 83/120
	I0729 18:53:54.580811  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 84/120
	I0729 18:53:55.582859  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 85/120
	I0729 18:53:56.584190  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 86/120
	I0729 18:53:57.585578  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 87/120
	I0729 18:53:58.586902  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 88/120
	I0729 18:53:59.588550  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 89/120
	I0729 18:54:00.590653  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 90/120
	I0729 18:54:01.591987  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 91/120
	I0729 18:54:02.593381  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 92/120
	I0729 18:54:03.594742  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 93/120
	I0729 18:54:04.596068  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 94/120
	I0729 18:54:05.598242  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 95/120
	I0729 18:54:06.599899  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 96/120
	I0729 18:54:07.601233  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 97/120
	I0729 18:54:08.602555  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 98/120
	I0729 18:54:09.604022  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 99/120
	I0729 18:54:10.606550  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 100/120
	I0729 18:54:11.608108  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 101/120
	I0729 18:54:12.609688  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 102/120
	I0729 18:54:13.610987  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 103/120
	I0729 18:54:14.612275  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 104/120
	I0729 18:54:15.613651  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 105/120
	I0729 18:54:16.614863  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 106/120
	I0729 18:54:17.616041  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 107/120
	I0729 18:54:18.617223  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 108/120
	I0729 18:54:19.619231  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 109/120
	I0729 18:54:20.621289  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 110/120
	I0729 18:54:21.623217  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 111/120
	I0729 18:54:22.624438  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 112/120
	I0729 18:54:23.625867  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 113/120
	I0729 18:54:24.627212  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 114/120
	I0729 18:54:25.628756  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 115/120
	I0729 18:54:26.630200  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 116/120
	I0729 18:54:27.631438  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 117/120
	I0729 18:54:28.633009  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 118/120
	I0729 18:54:29.634279  150777 main.go:141] libmachine: (default-k8s-diff-port-612270) Waiting for machine to stop 119/120
	I0729 18:54:30.635601  150777 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0729 18:54:30.635678  150777 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0729 18:54:30.637535  150777 out.go:177] 
	W0729 18:54:30.638753  150777 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0729 18:54:30.638769  150777 out.go:239] * 
	* 
	W0729 18:54:30.642112  150777 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 18:54:30.643804  150777 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-612270 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-612270 -n default-k8s-diff-port-612270
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-612270 -n default-k8s-diff-port-612270: exit status 3 (18.51523986s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 18:54:49.161187  151545 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.152:22: connect: no route to host
	E0729 18:54:49.161205  151545 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.152:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-612270" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.02s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.47s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-834964 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-834964 create -f testdata/busybox.yaml: exit status 1 (43.852713ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-834964" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-834964 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-834964 -n old-k8s-version-834964
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-834964 -n old-k8s-version-834964: exit status 6 (215.099708ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 18:53:43.959301  151169 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-834964" does not appear in /home/jenkins/minikube-integration/19339-88081/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-834964" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-834964 -n old-k8s-version-834964
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-834964 -n old-k8s-version-834964: exit status 6 (211.540163ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 18:53:44.171712  151211 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-834964" does not appear in /home/jenkins/minikube-integration/19339-88081/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-834964" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.47s)
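The DeployApp failure never reaches the cluster: kubectl rejects the command because the "old-k8s-version-834964" context is missing from the kubeconfig, which matches the "does not appear in .../kubeconfig" status error above. A hypothetical pre-flight check, shown only as a sketch and relying on standard `kubectl config get-contexts -o name` output rather than anything minikube-specific, could look like:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hasContext reports whether kubectl knows about the named context.
	// `kubectl config get-contexts -o name` prints one context name per line.
	func hasContext(name string) (bool, error) {
		out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
		if err != nil {
			return false, err
		}
		for _, ctx := range strings.Fields(string(out)) {
			if ctx == name {
				return true, nil
			}
		}
		return false, nil
	}

	func main() {
		ok, err := hasContext("old-k8s-version-834964")
		fmt.Println(ok, err) // false here, matching the "does not exist" error above
	}

The status output above already carries the report's own repair hint for a stale entry: run `minikube update-context`.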

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (113.9s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-834964 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0729 18:53:48.829649   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/flannel-085245/client.crt: no such file or directory
E0729 18:53:59.070108   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/flannel-085245/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-834964 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m53.639757908s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-834964 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-834964 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-834964 describe deploy/metrics-server -n kube-system: exit status 1 (43.938503ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-834964" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-834964 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-834964 -n old-k8s-version-834964
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-834964 -n old-k8s-version-834964: exit status 6 (212.903088ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 18:55:38.068064  151948 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-834964" does not appear in /home/jenkins/minikube-integration/19339-88081/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-834964" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (113.90s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-524369 -n no-preload-524369
E0729 18:54:02.324151   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/custom-flannel-085245/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-524369 -n no-preload-524369: exit status 3 (3.166312806s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 18:54:04.457185  151324 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.7:22: connect: no route to host
	E0729 18:54:04.457208  151324 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.7:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-524369 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0729 18:54:08.196250   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/enable-default-cni-085245/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-524369 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.154397596s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.7:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-524369 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-524369 -n no-preload-524369
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-524369 -n no-preload-524369: exit status 3 (3.061415032s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 18:54:13.673272  151405 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.7:22: connect: no route to host
	E0729 18:54:13.673296  151405 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.7:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-524369" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)
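Two different connectivity failures appear in these blocks: the metrics-server enable above fails with "connection refused" on localhost:8443 (the apiserver is not listening), while the post-stop status and dashboard-enable steps here fail with "no route to host" on 192.168.72.7:22 (the VM itself is unreachable over SSH). A trivial TCP probe, sketched below purely for illustration and not part of the test suite, is enough to tell the two apart:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// reachable performs a bare TCP dial with a short timeout, to distinguish
	// "connection refused" (service down) from "no route to host" (VM gone)
	// in the errors captured above.
	func reachable(addr string) error {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err != nil {
			return err
		}
		return conn.Close()
	}

	func main() {
		fmt.Println(reachable("localhost:8443"))  // refused while kube-apiserver is down
		fmt.Println(reachable("192.168.72.7:22")) // no route to host once the VM is unreachable
	}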

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-612270 -n default-k8s-diff-port-612270
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-612270 -n default-k8s-diff-port-612270: exit status 3 (3.167740341s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 18:54:52.329153  151657 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.152:22: connect: no route to host
	E0729 18:54:52.329175  151657 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.152:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-612270 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0729 18:54:53.656635   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/bridge-085245/client.crt: no such file or directory
E0729 18:54:53.661964   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/bridge-085245/client.crt: no such file or directory
E0729 18:54:53.672258   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/bridge-085245/client.crt: no such file or directory
E0729 18:54:53.692515   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/bridge-085245/client.crt: no such file or directory
E0729 18:54:53.732884   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/bridge-085245/client.crt: no such file or directory
E0729 18:54:53.813218   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/bridge-085245/client.crt: no such file or directory
E0729 18:54:53.973648   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/bridge-085245/client.crt: no such file or directory
E0729 18:54:54.294305   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/bridge-085245/client.crt: no such file or directory
E0729 18:54:54.935405   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/bridge-085245/client.crt: no such file or directory
E0729 18:54:56.215988   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/bridge-085245/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-612270 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153787301s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.152:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-612270 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-612270 -n default-k8s-diff-port-612270
E0729 18:54:58.776458   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/bridge-085245/client.crt: no such file or directory
E0729 18:55:00.511324   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/flannel-085245/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-612270 -n default-k8s-diff-port-612270: exit status 3 (3.062180451s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 18:55:01.545238  151721 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.152:22: connect: no route to host
	E0729 18:55:01.545257  151721 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.152:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-612270" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

TestStartStop/group/old-k8s-version/serial/SecondStart (723.97s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-834964 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0729 18:55:45.489738   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/calico-085245/client.crt: no such file or directory
E0729 18:55:47.657077   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/auto-085245/client.crt: no such file or directory
E0729 18:55:53.334690   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/client.crt: no such file or directory
E0729 18:56:15.341690   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/auto-085245/client.crt: no such file or directory
E0729 18:56:15.580030   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/bridge-085245/client.crt: no such file or directory
E0729 18:56:22.432227   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/flannel-085245/client.crt: no such file or directory
E0729 18:56:26.450463   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/calico-085245/client.crt: no such file or directory
E0729 18:56:39.107163   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/kindnet-085245/client.crt: no such file or directory
E0729 18:57:06.791777   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/kindnet-085245/client.crt: no such file or directory
E0729 18:57:37.501010   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/bridge-085245/client.crt: no such file or directory
E0729 18:57:40.400264   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/custom-flannel-085245/client.crt: no such file or directory
E0729 18:57:46.273894   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/enable-default-cni-085245/client.crt: no such file or directory
E0729 18:57:48.371270   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/calico-085245/client.crt: no such file or directory
E0729 18:58:08.085732   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/custom-flannel-085245/client.crt: no such file or directory
E0729 18:58:13.957271   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/enable-default-cni-085245/client.crt: no such file or directory
E0729 18:58:18.903862   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/functional-810151/client.crt: no such file or directory
E0729 18:58:38.590402   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/flannel-085245/client.crt: no such file or directory
E0729 18:59:06.273535   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/flannel-085245/client.crt: no such file or directory
E0729 18:59:41.951888   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/functional-810151/client.crt: no such file or directory
E0729 18:59:53.657077   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/bridge-085245/client.crt: no such file or directory
E0729 19:00:04.527968   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/calico-085245/client.crt: no such file or directory
E0729 19:00:21.341425   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/bridge-085245/client.crt: no such file or directory
E0729 19:00:32.211523   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/calico-085245/client.crt: no such file or directory
E0729 19:00:47.657077   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/auto-085245/client.crt: no such file or directory
E0729 19:00:53.334578   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-834964 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m2.516580903s)

-- stdout --
	* [old-k8s-version-834964] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19339
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19339-88081/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19339-88081/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-834964" primary control-plane node in "old-k8s-version-834964" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-834964" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0729 18:55:39.585743  152077 out.go:291] Setting OutFile to fd 1 ...
	I0729 18:55:39.585990  152077 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:55:39.586005  152077 out.go:304] Setting ErrFile to fd 2...
	I0729 18:55:39.586013  152077 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:55:39.586221  152077 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19339-88081/.minikube/bin
	I0729 18:55:39.586753  152077 out.go:298] Setting JSON to false
	I0729 18:55:39.587710  152077 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":13060,"bootTime":1722266280,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 18:55:39.587771  152077 start.go:139] virtualization: kvm guest
	I0729 18:55:39.589466  152077 out.go:177] * [old-k8s-version-834964] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 18:55:39.590918  152077 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 18:55:39.590970  152077 notify.go:220] Checking for updates...
	I0729 18:55:39.593175  152077 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 18:55:39.594395  152077 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19339-88081/kubeconfig
	I0729 18:55:39.595489  152077 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19339-88081/.minikube
	I0729 18:55:39.596514  152077 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 18:55:39.597494  152077 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 18:55:39.598986  152077 config.go:182] Loaded profile config "old-k8s-version-834964": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 18:55:39.599586  152077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:55:39.599662  152077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:55:39.614383  152077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46441
	I0729 18:55:39.614780  152077 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:55:39.615251  152077 main.go:141] libmachine: Using API Version  1
	I0729 18:55:39.615272  152077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:55:39.615579  152077 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:55:39.615785  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .DriverName
	I0729 18:55:39.617440  152077 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 18:55:39.618461  152077 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 18:55:39.618765  152077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:55:39.618806  152077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:55:39.632923  152077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45279
	I0729 18:55:39.633257  152077 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:55:39.633631  152077 main.go:141] libmachine: Using API Version  1
	I0729 18:55:39.633650  152077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:55:39.633958  152077 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:55:39.634132  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .DriverName
	I0729 18:55:39.667892  152077 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 18:55:39.669026  152077 start.go:297] selected driver: kvm2
	I0729 18:55:39.669040  152077 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-834964 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-834964 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.89 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:55:39.669173  152077 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 18:55:39.669961  152077 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 18:55:39.670042  152077 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19339-88081/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 18:55:39.684510  152077 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 18:55:39.684981  152077 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 18:55:39.685056  152077 cni.go:84] Creating CNI manager for ""
	I0729 18:55:39.685074  152077 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:55:39.685129  152077 start.go:340] cluster config:
	{Name:old-k8s-version-834964 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-834964 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.89 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:55:39.685275  152077 iso.go:125] acquiring lock: {Name:mkff602c753bfa3e0d79d57ee8dc490f2e8d0298 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 18:55:39.687045  152077 out.go:177] * Starting "old-k8s-version-834964" primary control-plane node in "old-k8s-version-834964" cluster
	I0729 18:55:39.688350  152077 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 18:55:39.688383  152077 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 18:55:39.688393  152077 cache.go:56] Caching tarball of preloaded images
	I0729 18:55:39.688471  152077 preload.go:172] Found /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 18:55:39.688484  152077 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0729 18:55:39.688615  152077 profile.go:143] Saving config to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/old-k8s-version-834964/config.json ...
	I0729 18:55:39.688812  152077 start.go:360] acquireMachinesLock for old-k8s-version-834964: {Name:mkb4fc91615189a18c2505c715f6575ee0da2912 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 18:59:10.725424  152077 start.go:364] duration metric: took 3m31.036575503s to acquireMachinesLock for "old-k8s-version-834964"
	I0729 18:59:10.725504  152077 start.go:96] Skipping create...Using existing machine configuration
	I0729 18:59:10.725513  152077 fix.go:54] fixHost starting: 
	I0729 18:59:10.726151  152077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:59:10.726198  152077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:59:10.742782  152077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37043
	I0729 18:59:10.743229  152077 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:59:10.743775  152077 main.go:141] libmachine: Using API Version  1
	I0729 18:59:10.743810  152077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:59:10.744116  152077 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:59:10.744309  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .DriverName
	I0729 18:59:10.744484  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetState
	I0729 18:59:10.745829  152077 fix.go:112] recreateIfNeeded on old-k8s-version-834964: state=Stopped err=<nil>
	I0729 18:59:10.745859  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .DriverName
	W0729 18:59:10.746000  152077 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 18:59:10.748309  152077 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-834964" ...
	I0729 18:59:10.749572  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .Start
	I0729 18:59:10.749851  152077 main.go:141] libmachine: (old-k8s-version-834964) Ensuring networks are active...
	I0729 18:59:10.750619  152077 main.go:141] libmachine: (old-k8s-version-834964) Ensuring network default is active
	I0729 18:59:10.750954  152077 main.go:141] libmachine: (old-k8s-version-834964) Ensuring network mk-old-k8s-version-834964 is active
	I0729 18:59:10.751344  152077 main.go:141] libmachine: (old-k8s-version-834964) Getting domain xml...
	I0729 18:59:10.752108  152077 main.go:141] libmachine: (old-k8s-version-834964) Creating domain...
	I0729 18:59:11.103179  152077 main.go:141] libmachine: (old-k8s-version-834964) Waiting to get IP...
	I0729 18:59:11.104133  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:11.104682  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | unable to find current IP address of domain old-k8s-version-834964 in network mk-old-k8s-version-834964
	I0729 18:59:11.104757  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | I0729 18:59:11.104644  152890 retry.go:31] will retry after 259.266842ms: waiting for machine to come up
	I0729 18:59:11.365299  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:11.365916  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | unable to find current IP address of domain old-k8s-version-834964 in network mk-old-k8s-version-834964
	I0729 18:59:11.365943  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | I0729 18:59:11.365862  152890 retry.go:31] will retry after 274.029734ms: waiting for machine to come up
	I0729 18:59:11.641428  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:11.641885  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | unable to find current IP address of domain old-k8s-version-834964 in network mk-old-k8s-version-834964
	I0729 18:59:11.641910  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | I0729 18:59:11.641824  152890 retry.go:31] will retry after 363.716855ms: waiting for machine to come up
	I0729 18:59:12.007550  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:12.008200  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | unable to find current IP address of domain old-k8s-version-834964 in network mk-old-k8s-version-834964
	I0729 18:59:12.008226  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | I0729 18:59:12.008158  152890 retry.go:31] will retry after 537.4279ms: waiting for machine to come up
	I0729 18:59:12.546892  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:12.547573  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | unable to find current IP address of domain old-k8s-version-834964 in network mk-old-k8s-version-834964
	I0729 18:59:12.547605  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | I0729 18:59:12.547529  152890 retry.go:31] will retry after 756.011995ms: waiting for machine to come up
	I0729 18:59:13.305557  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:13.306344  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | unable to find current IP address of domain old-k8s-version-834964 in network mk-old-k8s-version-834964
	I0729 18:59:13.306382  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | I0729 18:59:13.306295  152890 retry.go:31] will retry after 949.340755ms: waiting for machine to come up
	I0729 18:59:14.257589  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:14.258115  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | unable to find current IP address of domain old-k8s-version-834964 in network mk-old-k8s-version-834964
	I0729 18:59:14.258148  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | I0729 18:59:14.258059  152890 retry.go:31] will retry after 1.148418352s: waiting for machine to come up
	I0729 18:59:15.408710  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:15.409421  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | unable to find current IP address of domain old-k8s-version-834964 in network mk-old-k8s-version-834964
	I0729 18:59:15.409444  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | I0729 18:59:15.409376  152890 retry.go:31] will retry after 1.205038454s: waiting for machine to come up
	I0729 18:59:16.615884  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:16.616362  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | unable to find current IP address of domain old-k8s-version-834964 in network mk-old-k8s-version-834964
	I0729 18:59:16.616388  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | I0729 18:59:16.616324  152890 retry.go:31] will retry after 1.590208101s: waiting for machine to come up
	I0729 18:59:18.209022  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:18.209539  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | unable to find current IP address of domain old-k8s-version-834964 in network mk-old-k8s-version-834964
	I0729 18:59:18.209566  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | I0729 18:59:18.209487  152890 retry.go:31] will retry after 2.104289607s: waiting for machine to come up
	I0729 18:59:20.315121  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:20.315731  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | unable to find current IP address of domain old-k8s-version-834964 in network mk-old-k8s-version-834964
	I0729 18:59:20.315801  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | I0729 18:59:20.315678  152890 retry.go:31] will retry after 1.989233363s: waiting for machine to come up
	I0729 18:59:22.307337  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:22.307892  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | unable to find current IP address of domain old-k8s-version-834964 in network mk-old-k8s-version-834964
	I0729 18:59:22.307923  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | I0729 18:59:22.307834  152890 retry.go:31] will retry after 3.487502857s: waiting for machine to come up
	I0729 18:59:25.797201  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:25.797736  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | unable to find current IP address of domain old-k8s-version-834964 in network mk-old-k8s-version-834964
	I0729 18:59:25.797780  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | I0729 18:59:25.797650  152890 retry.go:31] will retry after 3.345863727s: waiting for machine to come up
	I0729 18:59:29.147040  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:29.147581  152077 main.go:141] libmachine: (old-k8s-version-834964) Found IP for machine: 192.168.61.89
	I0729 18:59:29.147605  152077 main.go:141] libmachine: (old-k8s-version-834964) Reserving static IP address...
	I0729 18:59:29.147620  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has current primary IP address 192.168.61.89 and MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:29.147994  152077 main.go:141] libmachine: (old-k8s-version-834964) Reserved static IP address: 192.168.61.89
	I0729 18:59:29.148031  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | found host DHCP lease matching {name: "old-k8s-version-834964", mac: "52:54:00:60:d4:59", ip: "192.168.61.89"} in network mk-old-k8s-version-834964: {Iface:virbr1 ExpiryTime:2024-07-29 19:59:21 +0000 UTC Type:0 Mac:52:54:00:60:d4:59 Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:old-k8s-version-834964 Clientid:01:52:54:00:60:d4:59}
	I0729 18:59:29.148049  152077 main.go:141] libmachine: (old-k8s-version-834964) Waiting for SSH to be available...
	I0729 18:59:29.148090  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | skip adding static IP to network mk-old-k8s-version-834964 - found existing host DHCP lease matching {name: "old-k8s-version-834964", mac: "52:54:00:60:d4:59", ip: "192.168.61.89"}
	I0729 18:59:29.148105  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | Getting to WaitForSSH function...
	I0729 18:59:29.150384  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:29.150778  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:59", ip: ""} in network mk-old-k8s-version-834964: {Iface:virbr1 ExpiryTime:2024-07-29 19:59:21 +0000 UTC Type:0 Mac:52:54:00:60:d4:59 Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:old-k8s-version-834964 Clientid:01:52:54:00:60:d4:59}
	I0729 18:59:29.150806  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined IP address 192.168.61.89 and MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:29.150940  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | Using SSH client type: external
	I0729 18:59:29.150987  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | Using SSH private key: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/old-k8s-version-834964/id_rsa (-rw-------)
	I0729 18:59:29.151026  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.89 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19339-88081/.minikube/machines/old-k8s-version-834964/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 18:59:29.151043  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | About to run SSH command:
	I0729 18:59:29.151056  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | exit 0
	I0729 18:59:29.272649  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | SSH cmd err, output: <nil>: 
	I0729 18:59:29.273065  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetConfigRaw
	I0729 18:59:29.273787  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetIP
	I0729 18:59:29.276070  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:29.276427  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:59", ip: ""} in network mk-old-k8s-version-834964: {Iface:virbr1 ExpiryTime:2024-07-29 19:59:21 +0000 UTC Type:0 Mac:52:54:00:60:d4:59 Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:old-k8s-version-834964 Clientid:01:52:54:00:60:d4:59}
	I0729 18:59:29.276450  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined IP address 192.168.61.89 and MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:29.276734  152077 profile.go:143] Saving config to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/old-k8s-version-834964/config.json ...
	I0729 18:59:29.276954  152077 machine.go:94] provisionDockerMachine start ...
	I0729 18:59:29.276973  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .DriverName
	I0729 18:59:29.277164  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHHostname
	I0729 18:59:29.279157  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:29.279493  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:59", ip: ""} in network mk-old-k8s-version-834964: {Iface:virbr1 ExpiryTime:2024-07-29 19:59:21 +0000 UTC Type:0 Mac:52:54:00:60:d4:59 Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:old-k8s-version-834964 Clientid:01:52:54:00:60:d4:59}
	I0729 18:59:29.279518  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined IP address 192.168.61.89 and MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:29.279679  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHPort
	I0729 18:59:29.279845  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHKeyPath
	I0729 18:59:29.279977  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHKeyPath
	I0729 18:59:29.280130  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHUsername
	I0729 18:59:29.280282  152077 main.go:141] libmachine: Using SSH client type: native
	I0729 18:59:29.280469  152077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.89 22 <nil> <nil>}
	I0729 18:59:29.280481  152077 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 18:59:29.376976  152077 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 18:59:29.377010  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetMachineName
	I0729 18:59:29.377308  152077 buildroot.go:166] provisioning hostname "old-k8s-version-834964"
	I0729 18:59:29.377334  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetMachineName
	I0729 18:59:29.377543  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHHostname
	I0729 18:59:29.380045  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:29.380366  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:59", ip: ""} in network mk-old-k8s-version-834964: {Iface:virbr1 ExpiryTime:2024-07-29 19:59:21 +0000 UTC Type:0 Mac:52:54:00:60:d4:59 Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:old-k8s-version-834964 Clientid:01:52:54:00:60:d4:59}
	I0729 18:59:29.380395  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined IP address 192.168.61.89 and MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:29.380510  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHPort
	I0729 18:59:29.380668  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHKeyPath
	I0729 18:59:29.380782  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHKeyPath
	I0729 18:59:29.380919  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHUsername
	I0729 18:59:29.381098  152077 main.go:141] libmachine: Using SSH client type: native
	I0729 18:59:29.381267  152077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.89 22 <nil> <nil>}
	I0729 18:59:29.381283  152077 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-834964 && echo "old-k8s-version-834964" | sudo tee /etc/hostname
	I0729 18:59:29.495056  152077 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-834964
	
	I0729 18:59:29.495080  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHHostname
	I0729 18:59:29.497946  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:29.498325  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:59", ip: ""} in network mk-old-k8s-version-834964: {Iface:virbr1 ExpiryTime:2024-07-29 19:59:21 +0000 UTC Type:0 Mac:52:54:00:60:d4:59 Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:old-k8s-version-834964 Clientid:01:52:54:00:60:d4:59}
	I0729 18:59:29.498357  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined IP address 192.168.61.89 and MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:29.498560  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHPort
	I0729 18:59:29.498766  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHKeyPath
	I0729 18:59:29.498930  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHKeyPath
	I0729 18:59:29.499047  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHUsername
	I0729 18:59:29.499173  152077 main.go:141] libmachine: Using SSH client type: native
	I0729 18:59:29.499353  152077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.89 22 <nil> <nil>}
	I0729 18:59:29.499371  152077 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-834964' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-834964/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-834964' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 18:59:29.606227  152077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 18:59:29.606269  152077 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19339-88081/.minikube CaCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19339-88081/.minikube}
	I0729 18:59:29.606313  152077 buildroot.go:174] setting up certificates
	I0729 18:59:29.606326  152077 provision.go:84] configureAuth start
	I0729 18:59:29.606341  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetMachineName
	I0729 18:59:29.606655  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetIP
	I0729 18:59:29.609303  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:29.609706  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:59", ip: ""} in network mk-old-k8s-version-834964: {Iface:virbr1 ExpiryTime:2024-07-29 19:59:21 +0000 UTC Type:0 Mac:52:54:00:60:d4:59 Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:old-k8s-version-834964 Clientid:01:52:54:00:60:d4:59}
	I0729 18:59:29.609730  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined IP address 192.168.61.89 and MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:29.609861  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHHostname
	I0729 18:59:29.612198  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:29.612587  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:59", ip: ""} in network mk-old-k8s-version-834964: {Iface:virbr1 ExpiryTime:2024-07-29 19:59:21 +0000 UTC Type:0 Mac:52:54:00:60:d4:59 Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:old-k8s-version-834964 Clientid:01:52:54:00:60:d4:59}
	I0729 18:59:29.612610  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined IP address 192.168.61.89 and MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:29.612731  152077 provision.go:143] copyHostCerts
	I0729 18:59:29.612780  152077 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem, removing ...
	I0729 18:59:29.612789  152077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem
	I0729 18:59:29.612846  152077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem (1679 bytes)
	I0729 18:59:29.612964  152077 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem, removing ...
	I0729 18:59:29.612976  152077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem
	I0729 18:59:29.612999  152077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem (1078 bytes)
	I0729 18:59:29.613054  152077 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem, removing ...
	I0729 18:59:29.613061  152077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem
	I0729 18:59:29.613077  152077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem (1123 bytes)
	I0729 18:59:29.613123  152077 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-834964 san=[127.0.0.1 192.168.61.89 localhost minikube old-k8s-version-834964]
	I0729 18:59:29.705910  152077 provision.go:177] copyRemoteCerts
	I0729 18:59:29.705976  152077 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 18:59:29.706002  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHHostname
	I0729 18:59:29.708478  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:29.708809  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:59", ip: ""} in network mk-old-k8s-version-834964: {Iface:virbr1 ExpiryTime:2024-07-29 19:59:21 +0000 UTC Type:0 Mac:52:54:00:60:d4:59 Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:old-k8s-version-834964 Clientid:01:52:54:00:60:d4:59}
	I0729 18:59:29.708845  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined IP address 192.168.61.89 and MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:29.709012  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHPort
	I0729 18:59:29.709191  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHKeyPath
	I0729 18:59:29.709356  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHUsername
	I0729 18:59:29.709462  152077 sshutil.go:53] new ssh client: &{IP:192.168.61.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/old-k8s-version-834964/id_rsa Username:docker}
	I0729 18:59:29.786569  152077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 18:59:29.810631  152077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0729 18:59:29.833915  152077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 18:59:29.857384  152077 provision.go:87] duration metric: took 251.042624ms to configureAuth
	I0729 18:59:29.857416  152077 buildroot.go:189] setting minikube options for container-runtime
	I0729 18:59:29.857640  152077 config.go:182] Loaded profile config "old-k8s-version-834964": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 18:59:29.857738  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHHostname
	I0729 18:59:29.860583  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:29.860937  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:59", ip: ""} in network mk-old-k8s-version-834964: {Iface:virbr1 ExpiryTime:2024-07-29 19:59:21 +0000 UTC Type:0 Mac:52:54:00:60:d4:59 Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:old-k8s-version-834964 Clientid:01:52:54:00:60:d4:59}
	I0729 18:59:29.860961  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined IP address 192.168.61.89 and MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:29.861218  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHPort
	I0729 18:59:29.861424  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHKeyPath
	I0729 18:59:29.861551  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHKeyPath
	I0729 18:59:29.861714  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHUsername
	I0729 18:59:29.861845  152077 main.go:141] libmachine: Using SSH client type: native
	I0729 18:59:29.862041  152077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.89 22 <nil> <nil>}
	I0729 18:59:29.862061  152077 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 18:59:30.113352  152077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 18:59:30.113379  152077 machine.go:97] duration metric: took 836.410672ms to provisionDockerMachine
	I0729 18:59:30.113393  152077 start.go:293] postStartSetup for "old-k8s-version-834964" (driver="kvm2")
	I0729 18:59:30.113406  152077 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 18:59:30.113427  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .DriverName
	I0729 18:59:30.113736  152077 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 18:59:30.113767  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHHostname
	I0729 18:59:30.116368  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:30.116721  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:59", ip: ""} in network mk-old-k8s-version-834964: {Iface:virbr1 ExpiryTime:2024-07-29 19:59:21 +0000 UTC Type:0 Mac:52:54:00:60:d4:59 Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:old-k8s-version-834964 Clientid:01:52:54:00:60:d4:59}
	I0729 18:59:30.116747  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined IP address 192.168.61.89 and MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:30.116952  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHPort
	I0729 18:59:30.117148  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHKeyPath
	I0729 18:59:30.117308  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHUsername
	I0729 18:59:30.117414  152077 sshutil.go:53] new ssh client: &{IP:192.168.61.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/old-k8s-version-834964/id_rsa Username:docker}
	I0729 18:59:30.195069  152077 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 18:59:30.199201  152077 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 18:59:30.199219  152077 filesync.go:126] Scanning /home/jenkins/minikube-integration/19339-88081/.minikube/addons for local assets ...
	I0729 18:59:30.199279  152077 filesync.go:126] Scanning /home/jenkins/minikube-integration/19339-88081/.minikube/files for local assets ...
	I0729 18:59:30.199374  152077 filesync.go:149] local asset: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem -> 952822.pem in /etc/ssl/certs
	I0729 18:59:30.199479  152077 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 18:59:30.208616  152077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem --> /etc/ssl/certs/952822.pem (1708 bytes)
	I0729 18:59:30.234943  152077 start.go:296] duration metric: took 121.530806ms for postStartSetup
	I0729 18:59:30.234985  152077 fix.go:56] duration metric: took 19.509472409s for fixHost
	I0729 18:59:30.235004  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHHostname
	I0729 18:59:30.237789  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:30.238195  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:59", ip: ""} in network mk-old-k8s-version-834964: {Iface:virbr1 ExpiryTime:2024-07-29 19:59:21 +0000 UTC Type:0 Mac:52:54:00:60:d4:59 Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:old-k8s-version-834964 Clientid:01:52:54:00:60:d4:59}
	I0729 18:59:30.238226  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined IP address 192.168.61.89 and MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:30.238369  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHPort
	I0729 18:59:30.238535  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHKeyPath
	I0729 18:59:30.238701  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHKeyPath
	I0729 18:59:30.238892  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHUsername
	I0729 18:59:30.239065  152077 main.go:141] libmachine: Using SSH client type: native
	I0729 18:59:30.239288  152077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.89 22 <nil> <nil>}
	I0729 18:59:30.239302  152077 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 18:59:30.342059  152077 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722279570.312960806
	
	I0729 18:59:30.342084  152077 fix.go:216] guest clock: 1722279570.312960806
	I0729 18:59:30.342092  152077 fix.go:229] Guest: 2024-07-29 18:59:30.312960806 +0000 UTC Remote: 2024-07-29 18:59:30.234988552 +0000 UTC m=+230.685193458 (delta=77.972254ms)
	I0729 18:59:30.342134  152077 fix.go:200] guest clock delta is within tolerance: 77.972254ms
	I0729 18:59:30.342145  152077 start.go:83] releasing machines lock for "old-k8s-version-834964", held for 19.616668039s
	I0729 18:59:30.342179  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .DriverName
	I0729 18:59:30.342502  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetIP
	I0729 18:59:30.345489  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:30.345885  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:59", ip: ""} in network mk-old-k8s-version-834964: {Iface:virbr1 ExpiryTime:2024-07-29 19:59:21 +0000 UTC Type:0 Mac:52:54:00:60:d4:59 Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:old-k8s-version-834964 Clientid:01:52:54:00:60:d4:59}
	I0729 18:59:30.345917  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined IP address 192.168.61.89 and MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:30.346038  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .DriverName
	I0729 18:59:30.346564  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .DriverName
	I0729 18:59:30.346761  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .DriverName
	I0729 18:59:30.346848  152077 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 18:59:30.346899  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHHostname
	I0729 18:59:30.347008  152077 ssh_runner.go:195] Run: cat /version.json
	I0729 18:59:30.347035  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHHostname
	I0729 18:59:30.349621  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:30.349978  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:30.350056  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:59", ip: ""} in network mk-old-k8s-version-834964: {Iface:virbr1 ExpiryTime:2024-07-29 19:59:21 +0000 UTC Type:0 Mac:52:54:00:60:d4:59 Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:old-k8s-version-834964 Clientid:01:52:54:00:60:d4:59}
	I0729 18:59:30.350080  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined IP address 192.168.61.89 and MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:30.350214  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHPort
	I0729 18:59:30.350385  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHKeyPath
	I0729 18:59:30.350466  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:59", ip: ""} in network mk-old-k8s-version-834964: {Iface:virbr1 ExpiryTime:2024-07-29 19:59:21 +0000 UTC Type:0 Mac:52:54:00:60:d4:59 Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:old-k8s-version-834964 Clientid:01:52:54:00:60:d4:59}
	I0729 18:59:30.350488  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined IP address 192.168.61.89 and MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:30.350563  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHUsername
	I0729 18:59:30.350625  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHPort
	I0729 18:59:30.350737  152077 sshutil.go:53] new ssh client: &{IP:192.168.61.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/old-k8s-version-834964/id_rsa Username:docker}
	I0729 18:59:30.350811  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHKeyPath
	I0729 18:59:30.350955  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetSSHUsername
	I0729 18:59:30.351110  152077 sshutil.go:53] new ssh client: &{IP:192.168.61.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/old-k8s-version-834964/id_rsa Username:docker}
	I0729 18:59:30.458405  152077 ssh_runner.go:195] Run: systemctl --version
	I0729 18:59:30.465636  152077 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 18:59:30.614302  152077 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 18:59:30.621254  152077 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 18:59:30.621341  152077 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 18:59:30.639929  152077 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
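
The find/mv step above sets aside any bridge or podman CNI configs by appending a .mk_disabled suffix, so only the CNI that minikube manages stays active. A sketch for listing what was moved on the guest (profile name taken from this log):

    # Show both active and disabled CNI configs on the node.
    minikube -p old-k8s-version-834964 ssh -- ls -l /etc/cni/net.d/
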
	I0729 18:59:30.639951  152077 start.go:495] detecting cgroup driver to use...
	I0729 18:59:30.640014  152077 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 18:59:30.660286  152077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 18:59:30.680212  152077 docker.go:217] disabling cri-docker service (if available) ...
	I0729 18:59:30.680287  152077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 18:59:30.700782  152077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 18:59:30.722050  152077 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 18:59:30.848624  152077 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 18:59:31.014541  152077 docker.go:233] disabling docker service ...
	I0729 18:59:31.014633  152077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 18:59:31.030560  152077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 18:59:31.043240  152077 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 18:59:31.182489  152077 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 18:59:31.338661  152077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 18:59:31.353489  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 18:59:31.372958  152077 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0729 18:59:31.373031  152077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:59:31.384674  152077 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 18:59:31.384743  152077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:59:31.397732  152077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:59:31.408481  152077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:59:31.418983  152077 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 18:59:31.430095  152077 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 18:59:31.440316  152077 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 18:59:31.440376  152077 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 18:59:31.454369  152077 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 18:59:31.464109  152077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:59:31.602010  152077 ssh_runner.go:195] Run: sudo systemctl restart crio
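
For reference, the sed edits above should leave the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf with roughly the following values; this is a sketch reconstructed from the commands logged here, not a dump of the actual file:

    # Sketch: expected settings in /etc/crio/crio.conf.d/02-crio.conf after the edits above.
    grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.2"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
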
	I0729 18:59:31.776788  152077 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 18:59:31.776884  152077 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 18:59:31.783376  152077 start.go:563] Will wait 60s for crictl version
	I0729 18:59:31.783440  152077 ssh_runner.go:195] Run: which crictl
	I0729 18:59:31.788335  152077 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 18:59:31.835043  152077 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 18:59:31.835137  152077 ssh_runner.go:195] Run: crio --version
	I0729 18:59:31.867407  152077 ssh_runner.go:195] Run: crio --version
	I0729 18:59:31.906757  152077 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0729 18:59:31.908229  152077 main.go:141] libmachine: (old-k8s-version-834964) Calling .GetIP
	I0729 18:59:31.911323  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:31.911752  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d4:59", ip: ""} in network mk-old-k8s-version-834964: {Iface:virbr1 ExpiryTime:2024-07-29 19:59:21 +0000 UTC Type:0 Mac:52:54:00:60:d4:59 Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:old-k8s-version-834964 Clientid:01:52:54:00:60:d4:59}
	I0729 18:59:31.911788  152077 main.go:141] libmachine: (old-k8s-version-834964) DBG | domain old-k8s-version-834964 has defined IP address 192.168.61.89 and MAC address 52:54:00:60:d4:59 in network mk-old-k8s-version-834964
	I0729 18:59:31.912046  152077 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0729 18:59:31.916244  152077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:59:31.932961  152077 kubeadm.go:883] updating cluster {Name:old-k8s-version-834964 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-834964 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.89 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 18:59:31.933091  152077 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 18:59:31.933152  152077 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:59:31.994345  152077 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 18:59:31.994433  152077 ssh_runner.go:195] Run: which lz4
	I0729 18:59:31.999099  152077 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 18:59:32.003996  152077 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 18:59:32.004036  152077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0729 18:59:33.668954  152077 crio.go:462] duration metric: took 1.669904838s to copy over tarball
	I0729 18:59:33.669039  152077 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 18:59:36.583975  152077 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.914883435s)
	I0729 18:59:36.584005  152077 crio.go:469] duration metric: took 2.915018011s to extract the tarball
	I0729 18:59:36.584016  152077 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 18:59:36.631515  152077 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:59:36.667867  152077 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 18:59:36.667896  152077 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 18:59:36.667964  152077 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:59:36.668006  152077 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 18:59:36.668011  152077 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 18:59:36.668026  152077 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0729 18:59:36.667965  152077 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 18:59:36.668009  152077 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0729 18:59:36.668080  152077 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0729 18:59:36.668040  152077 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 18:59:36.669854  152077 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0729 18:59:36.669863  152077 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0729 18:59:36.670066  152077 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 18:59:36.670165  152077 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 18:59:36.670165  152077 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0729 18:59:36.670221  152077 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:59:36.670243  152077 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 18:59:36.670165  152077 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 18:59:36.840898  152077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0729 18:59:36.843825  152077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0729 18:59:36.851242  152077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0729 18:59:36.856440  152077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0729 18:59:36.868504  152077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 18:59:36.889795  152077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0729 18:59:36.897786  152077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0729 18:59:36.948872  152077 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0729 18:59:36.948919  152077 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0729 18:59:36.948933  152077 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 18:59:36.948953  152077 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0729 18:59:36.948993  152077 ssh_runner.go:195] Run: which crictl
	I0729 18:59:36.948993  152077 ssh_runner.go:195] Run: which crictl
	I0729 18:59:36.982981  152077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:59:36.983833  152077 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0729 18:59:36.983868  152077 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0729 18:59:36.983903  152077 ssh_runner.go:195] Run: which crictl
	I0729 18:59:37.051531  152077 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0729 18:59:37.051573  152077 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 18:59:37.051626  152077 ssh_runner.go:195] Run: which crictl
	I0729 18:59:37.052794  152077 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0729 18:59:37.052836  152077 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 18:59:37.052894  152077 ssh_runner.go:195] Run: which crictl
	I0729 18:59:37.052891  152077 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0729 18:59:37.052972  152077 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0729 18:59:37.052994  152077 ssh_runner.go:195] Run: which crictl
	I0729 18:59:37.055958  152077 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0729 18:59:37.055993  152077 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 18:59:37.056027  152077 ssh_runner.go:195] Run: which crictl
	I0729 18:59:37.056053  152077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 18:59:37.056102  152077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 18:59:37.207598  152077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 18:59:37.207636  152077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 18:59:37.207647  152077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 18:59:37.207700  152077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 18:59:37.207790  152077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 18:59:37.207816  152077 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0729 18:59:37.207918  152077 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0729 18:59:37.321353  152077 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0729 18:59:37.323936  152077 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0729 18:59:37.330697  152077 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0729 18:59:37.330788  152077 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0729 18:59:37.330848  152077 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0729 18:59:37.330901  152077 cache_images.go:92] duration metric: took 662.990743ms to LoadCachedImages
	W0729 18:59:37.330994  152077 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19339-88081/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19339-88081/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0729 18:59:37.331012  152077 kubeadm.go:934] updating node { 192.168.61.89 8443 v1.20.0 crio true true} ...
	I0729 18:59:37.331174  152077 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-834964 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.89
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-834964 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
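
The kubelet unit drop-in shown above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down; a sketch for viewing the rendered unit on the guest (profile name from this log):

    # Print kubelet.service together with all drop-ins, as systemd resolves them.
    minikube -p old-k8s-version-834964 ssh -- systemctl cat kubelet
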
	I0729 18:59:37.331244  152077 ssh_runner.go:195] Run: crio config
	I0729 18:59:37.379781  152077 cni.go:84] Creating CNI manager for ""
	I0729 18:59:37.379805  152077 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:59:37.379821  152077 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 18:59:37.379849  152077 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.89 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-834964 NodeName:old-k8s-version-834964 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.89"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.89 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0729 18:59:37.380041  152077 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.89
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-834964"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.89
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.89"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
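
The kubeadm, kubelet, and kube-proxy configuration above is staged as /var/tmp/minikube/kubeadm.yaml.new a few lines below; a sketch for retrieving it from the guest for inspection (path and profile name from this log):

    # Dump the staged config exactly as it will be handed to kubeadm.
    minikube -p old-k8s-version-834964 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new
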
	I0729 18:59:37.380121  152077 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0729 18:59:37.390185  152077 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 18:59:37.390247  152077 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 18:59:37.401455  152077 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0729 18:59:37.419736  152077 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 18:59:37.438017  152077 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0729 18:59:37.457881  152077 ssh_runner.go:195] Run: grep 192.168.61.89	control-plane.minikube.internal$ /etc/hosts
	I0729 18:59:37.461878  152077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.89	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
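
The grep/cp pair above rewrites the guest's /etc/hosts so control-plane.minikube.internal resolves to the node IP; a quick verification, sketched (profile name and hostname from this log):

    # Confirm the control-plane alias points at 192.168.61.89.
    minikube -p old-k8s-version-834964 ssh -- grep control-plane.minikube.internal /etc/hosts
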
	I0729 18:59:37.475477  152077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:59:37.601386  152077 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:59:37.630282  152077 certs.go:68] Setting up /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/old-k8s-version-834964 for IP: 192.168.61.89
	I0729 18:59:37.630309  152077 certs.go:194] generating shared ca certs ...
	I0729 18:59:37.630331  152077 certs.go:226] acquiring lock for ca certs: {Name:mk20106d58b3f22ea0fc0f4b499fa3fa572a2690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:59:37.630517  152077 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key
	I0729 18:59:37.630574  152077 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key
	I0729 18:59:37.630587  152077 certs.go:256] generating profile certs ...
	I0729 18:59:37.630717  152077 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/old-k8s-version-834964/client.key
	I0729 18:59:37.630789  152077 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/old-k8s-version-834964/apiserver.key.34fbf854
	I0729 18:59:37.630855  152077 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/old-k8s-version-834964/proxy-client.key
	I0729 18:59:37.630995  152077 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282.pem (1338 bytes)
	W0729 18:59:37.631039  152077 certs.go:480] ignoring /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282_empty.pem, impossibly tiny 0 bytes
	I0729 18:59:37.631049  152077 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 18:59:37.631077  152077 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem (1078 bytes)
	I0729 18:59:37.631109  152077 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem (1123 bytes)
	I0729 18:59:37.631141  152077 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem (1679 bytes)
	I0729 18:59:37.631179  152077 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem (1708 bytes)
	I0729 18:59:37.631894  152077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 18:59:37.670793  152077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 18:59:37.698962  152077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 18:59:37.723732  152077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 18:59:37.752005  152077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/old-k8s-version-834964/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 18:59:37.791334  152077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/old-k8s-version-834964/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 18:59:37.830038  152077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/old-k8s-version-834964/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 18:59:37.860764  152077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/old-k8s-version-834964/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 18:59:37.900015  152077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 18:59:37.924659  152077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282.pem --> /usr/share/ca-certificates/95282.pem (1338 bytes)
	I0729 18:59:37.950049  152077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem --> /usr/share/ca-certificates/952822.pem (1708 bytes)
	I0729 18:59:37.974698  152077 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 18:59:37.991903  152077 ssh_runner.go:195] Run: openssl version
	I0729 18:59:37.997823  152077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 18:59:38.009021  152077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:59:38.013905  152077 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:59:38.014034  152077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:59:38.020663  152077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 18:59:38.032489  152077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/95282.pem && ln -fs /usr/share/ca-certificates/95282.pem /etc/ssl/certs/95282.pem"
	I0729 18:59:38.043992  152077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/95282.pem
	I0729 18:59:38.050676  152077 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:45 /usr/share/ca-certificates/95282.pem
	I0729 18:59:38.050753  152077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/95282.pem
	I0729 18:59:38.056989  152077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/95282.pem /etc/ssl/certs/51391683.0"
	I0729 18:59:38.068418  152077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/952822.pem && ln -fs /usr/share/ca-certificates/952822.pem /etc/ssl/certs/952822.pem"
	I0729 18:59:38.080303  152077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/952822.pem
	I0729 18:59:38.085665  152077 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:45 /usr/share/ca-certificates/952822.pem
	I0729 18:59:38.085736  152077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/952822.pem
	I0729 18:59:38.091430  152077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/952822.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 18:59:38.105136  152077 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 18:59:38.109647  152077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 18:59:38.115807  152077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 18:59:38.121672  152077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 18:59:38.128080  152077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 18:59:38.134195  152077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 18:59:38.140190  152077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
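
Each openssl check above uses -checkend 86400, i.e. "will this certificate still be valid 24 hours from now?". A sketch of the same check with an explicit pass/fail message (the cert path is taken from the log, the messages are illustrative):

    # -checkend N exits 0 if the certificate is still valid N seconds from now, 1 otherwise.
    openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400 \
      && echo "valid for at least 24h" || echo "expires within 24h (or read error)"
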
	I0729 18:59:38.146051  152077 kubeadm.go:392] StartCluster: {Name:old-k8s-version-834964 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-834964 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.89 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:59:38.146162  152077 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 18:59:38.146213  152077 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:59:38.182889  152077 cri.go:89] found id: ""
	I0729 18:59:38.182989  152077 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 18:59:38.193169  152077 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 18:59:38.193191  152077 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 18:59:38.193252  152077 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 18:59:38.202493  152077 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 18:59:38.203291  152077 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-834964" does not appear in /home/jenkins/minikube-integration/19339-88081/kubeconfig
	I0729 18:59:38.203782  152077 kubeconfig.go:62] /home/jenkins/minikube-integration/19339-88081/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-834964" cluster setting kubeconfig missing "old-k8s-version-834964" context setting]
	I0729 18:59:38.204438  152077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/kubeconfig: {Name:mk6702c0db404329102489655bdd2ff03ad6e919 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:59:38.230408  152077 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 18:59:38.243228  152077 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.89
	I0729 18:59:38.243262  152077 kubeadm.go:1160] stopping kube-system containers ...
	I0729 18:59:38.243276  152077 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 18:59:38.243335  152077 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:59:38.279296  152077 cri.go:89] found id: ""
	I0729 18:59:38.279380  152077 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 18:59:38.296415  152077 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:59:38.308152  152077 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:59:38.308174  152077 kubeadm.go:157] found existing configuration files:
	
	I0729 18:59:38.308225  152077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 18:59:38.317135  152077 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:59:38.317194  152077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:59:38.326564  152077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 18:59:38.336270  152077 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:59:38.336337  152077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:59:38.345342  152077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 18:59:38.354548  152077 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:59:38.354605  152077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:59:38.364166  152077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 18:59:38.373484  152077 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:59:38.373533  152077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 18:59:38.383259  152077 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 18:59:38.393125  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:59:38.532442  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:59:39.309448  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:59:39.560692  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:59:39.677689  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 18:59:39.773200  152077 api_server.go:52] waiting for apiserver process to appear ...
	I0729 18:59:39.773302  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:40.273962  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:40.773384  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:41.274085  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:41.773667  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:42.273638  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:42.774096  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:43.273549  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:43.773652  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:44.274085  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:44.773401  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:45.274278  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:45.773998  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:46.273669  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:46.773390  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:47.273729  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:47.773855  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:48.273869  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:48.773703  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:49.273532  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:49.774260  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:50.273544  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:50.774284  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:51.274389  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:51.774063  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:52.274103  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:52.774063  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:53.274140  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:53.773533  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:54.274045  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:54.774107  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:55.274068  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:55.773381  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:56.274102  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:56.773461  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:57.274039  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:57.774105  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:58.274395  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:58.774088  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:59.273822  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:59:59.774344  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:00.274074  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:00.773606  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:01.273454  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:01.773551  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:02.273747  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:02.773849  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:03.273732  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:03.773484  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:04.274361  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:04.773330  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:05.274258  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:05.773922  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:06.273449  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:06.774301  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:07.274401  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:07.773732  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:08.274173  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:08.773487  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:09.273473  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:09.773708  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:10.274054  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:10.774168  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:11.274093  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:11.774054  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:12.274363  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:12.774120  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:13.274081  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:13.773555  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:14.274061  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:14.773600  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:15.274094  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:15.774239  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:16.273651  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:16.773467  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:17.273714  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:17.773832  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:18.273382  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:18.773798  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:19.273832  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:19.773386  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:20.274067  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:20.774073  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:21.274066  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:21.773468  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:22.274072  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:22.773775  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:23.274078  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:23.774074  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:24.273444  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:24.774273  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:25.273450  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:25.773595  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:26.273427  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:26.773353  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:27.274332  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:27.773884  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:28.273365  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:28.774166  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:29.273960  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:29.773369  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:30.273412  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:30.773846  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:31.274110  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:31.773869  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:32.273833  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:32.773807  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:33.274079  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:33.773718  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:34.274389  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:34.774252  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:35.273526  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:35.774031  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:36.273954  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:36.773765  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:37.273786  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:37.774233  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:38.273605  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:38.773655  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:39.274064  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
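
The repeated pgrep calls above show minikube polling roughly every 500ms for a kube-apiserver process; after about a minute without a match it falls through to the diagnostics below. The equivalent wait, sketched as a shell loop using the same process pattern (the 60s deadline here is an assumption for illustration):

    # Poll for an apiserver process matching the full command line, until a deadline.
    deadline=$(( $(date +%s) + 60 ))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      [ "$(date +%s)" -ge "$deadline" ] && { echo "kube-apiserver process did not appear"; break; }
      sleep 0.5
    done
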
	I0729 19:00:39.773416  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:00:39.773516  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:00:39.814400  152077 cri.go:89] found id: ""
	I0729 19:00:39.814426  152077 logs.go:276] 0 containers: []
	W0729 19:00:39.814435  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:00:39.814441  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:00:39.814495  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:00:39.850437  152077 cri.go:89] found id: ""
	I0729 19:00:39.850466  152077 logs.go:276] 0 containers: []
	W0729 19:00:39.850478  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:00:39.850486  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:00:39.850550  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:00:39.886841  152077 cri.go:89] found id: ""
	I0729 19:00:39.886877  152077 logs.go:276] 0 containers: []
	W0729 19:00:39.886889  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:00:39.886898  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:00:39.886962  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:00:39.921450  152077 cri.go:89] found id: ""
	I0729 19:00:39.921483  152077 logs.go:276] 0 containers: []
	W0729 19:00:39.921498  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:00:39.921508  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:00:39.921574  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:00:39.959364  152077 cri.go:89] found id: ""
	I0729 19:00:39.959390  152077 logs.go:276] 0 containers: []
	W0729 19:00:39.959398  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:00:39.959404  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:00:39.959461  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:00:39.995074  152077 cri.go:89] found id: ""
	I0729 19:00:39.995101  152077 logs.go:276] 0 containers: []
	W0729 19:00:39.995112  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:00:39.995121  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:00:39.995185  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:00:40.033101  152077 cri.go:89] found id: ""
	I0729 19:00:40.033131  152077 logs.go:276] 0 containers: []
	W0729 19:00:40.033146  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:00:40.033154  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:00:40.033217  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:00:40.069273  152077 cri.go:89] found id: ""
	I0729 19:00:40.069301  152077 logs.go:276] 0 containers: []
	W0729 19:00:40.069311  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:00:40.069326  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:00:40.069344  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:00:40.121473  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:00:40.121511  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:00:40.136267  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:00:40.136300  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:00:40.255325  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:00:40.255347  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:00:40.255365  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:00:40.322460  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:00:40.322497  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:00:42.862734  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:42.876011  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:00:42.876075  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:00:42.915807  152077 cri.go:89] found id: ""
	I0729 19:00:42.915836  152077 logs.go:276] 0 containers: []
	W0729 19:00:42.915845  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:00:42.915856  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:00:42.915916  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:00:42.961500  152077 cri.go:89] found id: ""
	I0729 19:00:42.961535  152077 logs.go:276] 0 containers: []
	W0729 19:00:42.961546  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:00:42.961553  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:00:42.961617  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:00:43.006788  152077 cri.go:89] found id: ""
	I0729 19:00:43.006831  152077 logs.go:276] 0 containers: []
	W0729 19:00:43.006843  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:00:43.006852  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:00:43.006909  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:00:43.054235  152077 cri.go:89] found id: ""
	I0729 19:00:43.054266  152077 logs.go:276] 0 containers: []
	W0729 19:00:43.054277  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:00:43.054285  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:00:43.054347  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:00:43.093134  152077 cri.go:89] found id: ""
	I0729 19:00:43.093161  152077 logs.go:276] 0 containers: []
	W0729 19:00:43.093170  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:00:43.093176  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:00:43.093225  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:00:43.128632  152077 cri.go:89] found id: ""
	I0729 19:00:43.128661  152077 logs.go:276] 0 containers: []
	W0729 19:00:43.128670  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:00:43.128676  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:00:43.128735  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:00:43.164470  152077 cri.go:89] found id: ""
	I0729 19:00:43.164495  152077 logs.go:276] 0 containers: []
	W0729 19:00:43.164503  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:00:43.164509  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:00:43.164565  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:00:43.198401  152077 cri.go:89] found id: ""
	I0729 19:00:43.198433  152077 logs.go:276] 0 containers: []
	W0729 19:00:43.198444  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:00:43.198457  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:00:43.198474  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:00:43.211431  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:00:43.211456  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:00:43.298317  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:00:43.298346  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:00:43.298367  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:00:43.372987  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:00:43.373023  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:00:43.411907  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:00:43.411935  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:00:45.964405  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:45.979422  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:00:45.979490  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:00:46.019631  152077 cri.go:89] found id: ""
	I0729 19:00:46.019658  152077 logs.go:276] 0 containers: []
	W0729 19:00:46.019666  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:00:46.019672  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:00:46.019722  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:00:46.060112  152077 cri.go:89] found id: ""
	I0729 19:00:46.060141  152077 logs.go:276] 0 containers: []
	W0729 19:00:46.060149  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:00:46.060155  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:00:46.060222  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:00:46.095008  152077 cri.go:89] found id: ""
	I0729 19:00:46.095036  152077 logs.go:276] 0 containers: []
	W0729 19:00:46.095046  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:00:46.095054  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:00:46.095123  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:00:46.136824  152077 cri.go:89] found id: ""
	I0729 19:00:46.136850  152077 logs.go:276] 0 containers: []
	W0729 19:00:46.136874  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:00:46.136883  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:00:46.136944  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:00:46.175572  152077 cri.go:89] found id: ""
	I0729 19:00:46.175597  152077 logs.go:276] 0 containers: []
	W0729 19:00:46.175606  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:00:46.175612  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:00:46.175662  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:00:46.212359  152077 cri.go:89] found id: ""
	I0729 19:00:46.212394  152077 logs.go:276] 0 containers: []
	W0729 19:00:46.212409  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:00:46.212418  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:00:46.212482  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:00:46.250722  152077 cri.go:89] found id: ""
	I0729 19:00:46.250757  152077 logs.go:276] 0 containers: []
	W0729 19:00:46.250768  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:00:46.250776  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:00:46.250846  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:00:46.284967  152077 cri.go:89] found id: ""
	I0729 19:00:46.284992  152077 logs.go:276] 0 containers: []
	W0729 19:00:46.285006  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:00:46.285015  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:00:46.285027  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:00:46.337522  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:00:46.337553  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:00:46.350965  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:00:46.350992  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:00:46.423899  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:00:46.423924  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:00:46.423947  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:00:46.500612  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:00:46.500651  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:00:49.039471  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:49.054210  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:00:49.054278  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:00:49.094352  152077 cri.go:89] found id: ""
	I0729 19:00:49.094377  152077 logs.go:276] 0 containers: []
	W0729 19:00:49.094385  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:00:49.094393  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:00:49.094450  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:00:49.134527  152077 cri.go:89] found id: ""
	I0729 19:00:49.134558  152077 logs.go:276] 0 containers: []
	W0729 19:00:49.134569  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:00:49.134577  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:00:49.134646  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:00:49.172752  152077 cri.go:89] found id: ""
	I0729 19:00:49.172783  152077 logs.go:276] 0 containers: []
	W0729 19:00:49.172797  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:00:49.172805  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:00:49.172900  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:00:49.206900  152077 cri.go:89] found id: ""
	I0729 19:00:49.206923  152077 logs.go:276] 0 containers: []
	W0729 19:00:49.206931  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:00:49.206937  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:00:49.206998  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:00:49.241708  152077 cri.go:89] found id: ""
	I0729 19:00:49.241736  152077 logs.go:276] 0 containers: []
	W0729 19:00:49.241745  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:00:49.241751  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:00:49.241803  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:00:49.279727  152077 cri.go:89] found id: ""
	I0729 19:00:49.279757  152077 logs.go:276] 0 containers: []
	W0729 19:00:49.279768  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:00:49.279776  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:00:49.279842  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:00:49.313695  152077 cri.go:89] found id: ""
	I0729 19:00:49.313722  152077 logs.go:276] 0 containers: []
	W0729 19:00:49.313731  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:00:49.313737  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:00:49.313795  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:00:49.351878  152077 cri.go:89] found id: ""
	I0729 19:00:49.351910  152077 logs.go:276] 0 containers: []
	W0729 19:00:49.351920  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:00:49.351932  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:00:49.351946  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:00:49.364944  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:00:49.364971  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:00:49.433729  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:00:49.433756  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:00:49.433771  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:00:49.513965  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:00:49.514002  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:00:49.555427  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:00:49.555459  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:00:52.108824  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:52.122490  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:00:52.122568  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:00:52.158170  152077 cri.go:89] found id: ""
	I0729 19:00:52.158202  152077 logs.go:276] 0 containers: []
	W0729 19:00:52.158214  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:00:52.158222  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:00:52.158288  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:00:52.192916  152077 cri.go:89] found id: ""
	I0729 19:00:52.192947  152077 logs.go:276] 0 containers: []
	W0729 19:00:52.192959  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:00:52.192967  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:00:52.193040  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:00:52.225783  152077 cri.go:89] found id: ""
	I0729 19:00:52.225815  152077 logs.go:276] 0 containers: []
	W0729 19:00:52.225826  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:00:52.225834  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:00:52.225899  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:00:52.265368  152077 cri.go:89] found id: ""
	I0729 19:00:52.265395  152077 logs.go:276] 0 containers: []
	W0729 19:00:52.265406  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:00:52.265413  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:00:52.265473  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:00:52.299857  152077 cri.go:89] found id: ""
	I0729 19:00:52.299904  152077 logs.go:276] 0 containers: []
	W0729 19:00:52.299915  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:00:52.299923  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:00:52.299992  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:00:52.338117  152077 cri.go:89] found id: ""
	I0729 19:00:52.338143  152077 logs.go:276] 0 containers: []
	W0729 19:00:52.338154  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:00:52.338162  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:00:52.338222  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:00:52.372237  152077 cri.go:89] found id: ""
	I0729 19:00:52.372261  152077 logs.go:276] 0 containers: []
	W0729 19:00:52.372269  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:00:52.372275  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:00:52.372324  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:00:52.409303  152077 cri.go:89] found id: ""
	I0729 19:00:52.409329  152077 logs.go:276] 0 containers: []
	W0729 19:00:52.409337  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:00:52.409347  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:00:52.409360  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:00:52.460746  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:00:52.460777  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:00:52.474486  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:00:52.474515  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:00:52.553416  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:00:52.553438  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:00:52.553455  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:00:52.638968  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:00:52.639015  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:00:55.179242  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:55.192550  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:00:55.192610  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:00:55.228887  152077 cri.go:89] found id: ""
	I0729 19:00:55.228917  152077 logs.go:276] 0 containers: []
	W0729 19:00:55.228925  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:00:55.228930  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:00:55.228989  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:00:55.266646  152077 cri.go:89] found id: ""
	I0729 19:00:55.266679  152077 logs.go:276] 0 containers: []
	W0729 19:00:55.266690  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:00:55.266697  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:00:55.266758  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:00:55.307050  152077 cri.go:89] found id: ""
	I0729 19:00:55.307090  152077 logs.go:276] 0 containers: []
	W0729 19:00:55.307102  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:00:55.307110  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:00:55.307172  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:00:55.343778  152077 cri.go:89] found id: ""
	I0729 19:00:55.343806  152077 logs.go:276] 0 containers: []
	W0729 19:00:55.343817  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:00:55.343824  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:00:55.343892  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:00:55.378481  152077 cri.go:89] found id: ""
	I0729 19:00:55.378512  152077 logs.go:276] 0 containers: []
	W0729 19:00:55.378524  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:00:55.378532  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:00:55.378593  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:00:55.412401  152077 cri.go:89] found id: ""
	I0729 19:00:55.412432  152077 logs.go:276] 0 containers: []
	W0729 19:00:55.412445  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:00:55.412452  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:00:55.412516  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:00:55.447365  152077 cri.go:89] found id: ""
	I0729 19:00:55.447392  152077 logs.go:276] 0 containers: []
	W0729 19:00:55.447400  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:00:55.447406  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:00:55.447452  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:00:55.482482  152077 cri.go:89] found id: ""
	I0729 19:00:55.482506  152077 logs.go:276] 0 containers: []
	W0729 19:00:55.482515  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:00:55.482526  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:00:55.482541  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:00:55.552333  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:00:55.552361  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:00:55.552379  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:00:55.632588  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:00:55.632626  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:00:55.674827  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:00:55.674865  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:00:55.728009  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:00:55.728054  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:00:58.243181  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:00:58.256700  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:00:58.256762  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:00:58.291952  152077 cri.go:89] found id: ""
	I0729 19:00:58.291979  152077 logs.go:276] 0 containers: []
	W0729 19:00:58.291989  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:00:58.291995  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:00:58.292055  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:00:58.325824  152077 cri.go:89] found id: ""
	I0729 19:00:58.325858  152077 logs.go:276] 0 containers: []
	W0729 19:00:58.325869  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:00:58.325877  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:00:58.325934  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:00:58.359100  152077 cri.go:89] found id: ""
	I0729 19:00:58.359130  152077 logs.go:276] 0 containers: []
	W0729 19:00:58.359142  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:00:58.359149  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:00:58.359236  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:00:58.390409  152077 cri.go:89] found id: ""
	I0729 19:00:58.390442  152077 logs.go:276] 0 containers: []
	W0729 19:00:58.390453  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:00:58.390462  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:00:58.390525  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:00:58.426976  152077 cri.go:89] found id: ""
	I0729 19:00:58.427004  152077 logs.go:276] 0 containers: []
	W0729 19:00:58.427023  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:00:58.427031  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:00:58.427091  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:00:58.460492  152077 cri.go:89] found id: ""
	I0729 19:00:58.460528  152077 logs.go:276] 0 containers: []
	W0729 19:00:58.460537  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:00:58.460545  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:00:58.460608  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:00:58.495894  152077 cri.go:89] found id: ""
	I0729 19:00:58.495930  152077 logs.go:276] 0 containers: []
	W0729 19:00:58.495942  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:00:58.495950  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:00:58.496030  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:00:58.530710  152077 cri.go:89] found id: ""
	I0729 19:00:58.530739  152077 logs.go:276] 0 containers: []
	W0729 19:00:58.530750  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:00:58.530762  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:00:58.530779  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:00:58.607469  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:00:58.607515  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:00:58.646982  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:00:58.647016  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:00:58.698304  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:00:58.698356  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:00:58.713370  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:00:58.713398  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:00:58.786858  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:01:01.287427  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:01:01.301239  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:01:01.301316  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:01:01.337329  152077 cri.go:89] found id: ""
	I0729 19:01:01.337357  152077 logs.go:276] 0 containers: []
	W0729 19:01:01.337368  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:01:01.337376  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:01:01.337440  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:01:01.375796  152077 cri.go:89] found id: ""
	I0729 19:01:01.375828  152077 logs.go:276] 0 containers: []
	W0729 19:01:01.375836  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:01:01.375843  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:01:01.375904  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:01:01.408560  152077 cri.go:89] found id: ""
	I0729 19:01:01.408585  152077 logs.go:276] 0 containers: []
	W0729 19:01:01.408594  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:01:01.408600  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:01:01.408658  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:01:01.443797  152077 cri.go:89] found id: ""
	I0729 19:01:01.443833  152077 logs.go:276] 0 containers: []
	W0729 19:01:01.443841  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:01:01.443849  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:01:01.443909  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:01:01.478900  152077 cri.go:89] found id: ""
	I0729 19:01:01.478928  152077 logs.go:276] 0 containers: []
	W0729 19:01:01.478941  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:01:01.478948  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:01:01.479014  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:01:01.512370  152077 cri.go:89] found id: ""
	I0729 19:01:01.512398  152077 logs.go:276] 0 containers: []
	W0729 19:01:01.512407  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:01:01.512413  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:01:01.512463  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:01:01.546996  152077 cri.go:89] found id: ""
	I0729 19:01:01.547031  152077 logs.go:276] 0 containers: []
	W0729 19:01:01.547042  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:01:01.547050  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:01:01.547113  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:01:01.581135  152077 cri.go:89] found id: ""
	I0729 19:01:01.581161  152077 logs.go:276] 0 containers: []
	W0729 19:01:01.581169  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:01:01.581178  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:01:01.581194  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:01:01.595012  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:01:01.595042  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:01:01.670013  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:01:01.670034  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:01:01.670047  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:01:01.746304  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:01:01.746342  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:01:01.788085  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:01:01.788122  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:01:04.339966  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:01:04.353377  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:01:04.353447  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:01:04.386653  152077 cri.go:89] found id: ""
	I0729 19:01:04.386680  152077 logs.go:276] 0 containers: []
	W0729 19:01:04.386691  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:01:04.386699  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:01:04.386763  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:01:04.420317  152077 cri.go:89] found id: ""
	I0729 19:01:04.420350  152077 logs.go:276] 0 containers: []
	W0729 19:01:04.420360  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:01:04.420369  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:01:04.420436  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:01:04.454461  152077 cri.go:89] found id: ""
	I0729 19:01:04.454485  152077 logs.go:276] 0 containers: []
	W0729 19:01:04.454495  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:01:04.454502  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:01:04.454562  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:01:04.487377  152077 cri.go:89] found id: ""
	I0729 19:01:04.487403  152077 logs.go:276] 0 containers: []
	W0729 19:01:04.487415  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:01:04.487423  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:01:04.487489  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:01:04.520888  152077 cri.go:89] found id: ""
	I0729 19:01:04.520914  152077 logs.go:276] 0 containers: []
	W0729 19:01:04.520924  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:01:04.520930  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:01:04.520982  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:01:04.554321  152077 cri.go:89] found id: ""
	I0729 19:01:04.554345  152077 logs.go:276] 0 containers: []
	W0729 19:01:04.554354  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:01:04.554361  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:01:04.554427  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:01:04.593894  152077 cri.go:89] found id: ""
	I0729 19:01:04.593926  152077 logs.go:276] 0 containers: []
	W0729 19:01:04.593937  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:01:04.593945  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:01:04.594013  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:01:04.627113  152077 cri.go:89] found id: ""
	I0729 19:01:04.627140  152077 logs.go:276] 0 containers: []
	W0729 19:01:04.627148  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:01:04.627158  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:01:04.627170  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:01:04.678099  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:01:04.678134  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:01:04.692096  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:01:04.692125  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:01:04.763388  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:01:04.763414  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:01:04.763432  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:01:04.842745  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:01:04.842774  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:01:07.384259  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:01:07.397933  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:01:07.398000  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:01:07.443262  152077 cri.go:89] found id: ""
	I0729 19:01:07.443289  152077 logs.go:276] 0 containers: []
	W0729 19:01:07.443300  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:01:07.443308  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:01:07.443365  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:01:07.477719  152077 cri.go:89] found id: ""
	I0729 19:01:07.477749  152077 logs.go:276] 0 containers: []
	W0729 19:01:07.477764  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:01:07.477771  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:01:07.477835  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:01:07.512037  152077 cri.go:89] found id: ""
	I0729 19:01:07.512062  152077 logs.go:276] 0 containers: []
	W0729 19:01:07.512071  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:01:07.512077  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:01:07.512134  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:01:07.554189  152077 cri.go:89] found id: ""
	I0729 19:01:07.554223  152077 logs.go:276] 0 containers: []
	W0729 19:01:07.554234  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:01:07.554242  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:01:07.554307  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:01:07.588508  152077 cri.go:89] found id: ""
	I0729 19:01:07.588540  152077 logs.go:276] 0 containers: []
	W0729 19:01:07.588551  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:01:07.588559  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:01:07.588631  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:01:07.622139  152077 cri.go:89] found id: ""
	I0729 19:01:07.622164  152077 logs.go:276] 0 containers: []
	W0729 19:01:07.622176  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:01:07.622184  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:01:07.622254  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:01:07.656573  152077 cri.go:89] found id: ""
	I0729 19:01:07.656607  152077 logs.go:276] 0 containers: []
	W0729 19:01:07.656619  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:01:07.656627  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:01:07.656695  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:01:07.694720  152077 cri.go:89] found id: ""
	I0729 19:01:07.694748  152077 logs.go:276] 0 containers: []
	W0729 19:01:07.694759  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:01:07.694770  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:01:07.694787  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:01:07.762272  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:01:07.762294  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:01:07.762311  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:01:07.843424  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:01:07.843456  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:01:07.880999  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:01:07.881035  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:01:07.932111  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:01:07.932143  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:01:10.446339  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:01:10.459790  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:01:10.459868  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:01:10.497683  152077 cri.go:89] found id: ""
	I0729 19:01:10.497710  152077 logs.go:276] 0 containers: []
	W0729 19:01:10.497719  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:01:10.497724  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:01:10.497785  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:01:10.531004  152077 cri.go:89] found id: ""
	I0729 19:01:10.531028  152077 logs.go:276] 0 containers: []
	W0729 19:01:10.531037  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:01:10.531046  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:01:10.531106  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:01:10.567777  152077 cri.go:89] found id: ""
	I0729 19:01:10.567806  152077 logs.go:276] 0 containers: []
	W0729 19:01:10.567817  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:01:10.567828  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:01:10.567897  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:01:10.602032  152077 cri.go:89] found id: ""
	I0729 19:01:10.602058  152077 logs.go:276] 0 containers: []
	W0729 19:01:10.602068  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:01:10.602075  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:01:10.602135  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:01:10.636349  152077 cri.go:89] found id: ""
	I0729 19:01:10.636380  152077 logs.go:276] 0 containers: []
	W0729 19:01:10.636391  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:01:10.636399  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:01:10.636461  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:01:10.670751  152077 cri.go:89] found id: ""
	I0729 19:01:10.670785  152077 logs.go:276] 0 containers: []
	W0729 19:01:10.670795  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:01:10.670809  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:01:10.670879  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:01:10.705196  152077 cri.go:89] found id: ""
	I0729 19:01:10.705227  152077 logs.go:276] 0 containers: []
	W0729 19:01:10.705241  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:01:10.705249  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:01:10.705310  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:01:10.743818  152077 cri.go:89] found id: ""
	I0729 19:01:10.743852  152077 logs.go:276] 0 containers: []
	W0729 19:01:10.743864  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:01:10.743883  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:01:10.743900  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:01:10.756993  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:01:10.757029  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:01:10.825151  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:01:10.825177  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:01:10.825194  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:01:10.907686  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:01:10.907728  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:01:10.947670  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:01:10.947704  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:01:13.499434  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:01:13.512313  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:01:13.512386  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:01:13.547396  152077 cri.go:89] found id: ""
	I0729 19:01:13.547426  152077 logs.go:276] 0 containers: []
	W0729 19:01:13.547438  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:01:13.547446  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:01:13.547510  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:01:13.591627  152077 cri.go:89] found id: ""
	I0729 19:01:13.591656  152077 logs.go:276] 0 containers: []
	W0729 19:01:13.591665  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:01:13.591670  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:01:13.591734  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:01:13.633779  152077 cri.go:89] found id: ""
	I0729 19:01:13.633818  152077 logs.go:276] 0 containers: []
	W0729 19:01:13.633829  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:01:13.633837  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:01:13.633906  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:01:13.673893  152077 cri.go:89] found id: ""
	I0729 19:01:13.673917  152077 logs.go:276] 0 containers: []
	W0729 19:01:13.673926  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:01:13.673932  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:01:13.673993  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:01:13.707725  152077 cri.go:89] found id: ""
	I0729 19:01:13.707753  152077 logs.go:276] 0 containers: []
	W0729 19:01:13.707763  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:01:13.707772  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:01:13.707832  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:01:13.747771  152077 cri.go:89] found id: ""
	I0729 19:01:13.747799  152077 logs.go:276] 0 containers: []
	W0729 19:01:13.747815  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:01:13.747825  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:01:13.747887  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:01:13.785726  152077 cri.go:89] found id: ""
	I0729 19:01:13.785747  152077 logs.go:276] 0 containers: []
	W0729 19:01:13.785754  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:01:13.785760  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:01:13.785805  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:01:13.826770  152077 cri.go:89] found id: ""
	I0729 19:01:13.826793  152077 logs.go:276] 0 containers: []
	W0729 19:01:13.826800  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:01:13.826809  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:01:13.826821  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:01:13.884887  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:01:13.884918  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:01:13.899328  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:01:13.899350  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:01:13.973503  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:01:13.973525  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:01:13.973539  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:01:14.056852  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:01:14.056899  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:01:16.600780  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:01:16.616159  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:01:16.616229  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:01:16.654165  152077 cri.go:89] found id: ""
	I0729 19:01:16.654188  152077 logs.go:276] 0 containers: []
	W0729 19:01:16.654200  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:01:16.654206  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:01:16.654252  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:01:16.693673  152077 cri.go:89] found id: ""
	I0729 19:01:16.693703  152077 logs.go:276] 0 containers: []
	W0729 19:01:16.693715  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:01:16.693722  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:01:16.693797  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:01:16.730286  152077 cri.go:89] found id: ""
	I0729 19:01:16.730312  152077 logs.go:276] 0 containers: []
	W0729 19:01:16.730320  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:01:16.730326  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:01:16.730389  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:01:16.763508  152077 cri.go:89] found id: ""
	I0729 19:01:16.763538  152077 logs.go:276] 0 containers: []
	W0729 19:01:16.763548  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:01:16.763556  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:01:16.763632  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:01:16.829574  152077 cri.go:89] found id: ""
	I0729 19:01:16.829603  152077 logs.go:276] 0 containers: []
	W0729 19:01:16.829615  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:01:16.829623  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:01:16.829701  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:01:16.867722  152077 cri.go:89] found id: ""
	I0729 19:01:16.867751  152077 logs.go:276] 0 containers: []
	W0729 19:01:16.867762  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:01:16.867771  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:01:16.867859  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:01:16.905536  152077 cri.go:89] found id: ""
	I0729 19:01:16.905576  152077 logs.go:276] 0 containers: []
	W0729 19:01:16.905586  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:01:16.905595  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:01:16.905663  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:01:16.944434  152077 cri.go:89] found id: ""
	I0729 19:01:16.944459  152077 logs.go:276] 0 containers: []
	W0729 19:01:16.944469  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:01:16.944481  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:01:16.944496  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:01:16.998926  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:01:16.998963  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:01:17.013519  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:01:17.013550  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:01:17.091509  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:01:17.091535  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:01:17.091549  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:01:17.173000  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:01:17.173043  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:01:19.720368  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:01:19.737530  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:01:19.737602  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:01:19.777133  152077 cri.go:89] found id: ""
	I0729 19:01:19.777165  152077 logs.go:276] 0 containers: []
	W0729 19:01:19.777176  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:01:19.777184  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:01:19.777248  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:01:19.813408  152077 cri.go:89] found id: ""
	I0729 19:01:19.813437  152077 logs.go:276] 0 containers: []
	W0729 19:01:19.813448  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:01:19.813456  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:01:19.813527  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:01:19.850307  152077 cri.go:89] found id: ""
	I0729 19:01:19.850334  152077 logs.go:276] 0 containers: []
	W0729 19:01:19.850343  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:01:19.850351  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:01:19.850409  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:01:19.884974  152077 cri.go:89] found id: ""
	I0729 19:01:19.885018  152077 logs.go:276] 0 containers: []
	W0729 19:01:19.885029  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:01:19.885037  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:01:19.885104  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:01:19.925900  152077 cri.go:89] found id: ""
	I0729 19:01:19.925930  152077 logs.go:276] 0 containers: []
	W0729 19:01:19.925942  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:01:19.925950  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:01:19.926003  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:01:19.959978  152077 cri.go:89] found id: ""
	I0729 19:01:19.960007  152077 logs.go:276] 0 containers: []
	W0729 19:01:19.960019  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:01:19.960027  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:01:19.960089  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:01:19.995879  152077 cri.go:89] found id: ""
	I0729 19:01:19.995911  152077 logs.go:276] 0 containers: []
	W0729 19:01:19.995923  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:01:19.995965  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:01:19.996029  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:01:20.043899  152077 cri.go:89] found id: ""
	I0729 19:01:20.043926  152077 logs.go:276] 0 containers: []
	W0729 19:01:20.043937  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:01:20.043951  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:01:20.043972  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:01:20.099243  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:01:20.099276  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:01:20.112639  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:01:20.112674  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:01:20.189277  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:01:20.189296  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:01:20.189310  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:01:20.275076  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:01:20.275120  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:01:22.815386  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:01:22.828963  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:01:22.829042  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:01:22.865976  152077 cri.go:89] found id: ""
	I0729 19:01:22.866010  152077 logs.go:276] 0 containers: []
	W0729 19:01:22.866022  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:01:22.866031  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:01:22.866088  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:01:22.906873  152077 cri.go:89] found id: ""
	I0729 19:01:22.906902  152077 logs.go:276] 0 containers: []
	W0729 19:01:22.906913  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:01:22.906920  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:01:22.907004  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:01:22.943023  152077 cri.go:89] found id: ""
	I0729 19:01:22.943052  152077 logs.go:276] 0 containers: []
	W0729 19:01:22.943062  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:01:22.943070  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:01:22.943153  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:01:22.983464  152077 cri.go:89] found id: ""
	I0729 19:01:22.983496  152077 logs.go:276] 0 containers: []
	W0729 19:01:22.983507  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:01:22.983516  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:01:22.983582  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:01:23.020675  152077 cri.go:89] found id: ""
	I0729 19:01:23.020702  152077 logs.go:276] 0 containers: []
	W0729 19:01:23.020710  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:01:23.020716  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:01:23.020781  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:01:23.055210  152077 cri.go:89] found id: ""
	I0729 19:01:23.055241  152077 logs.go:276] 0 containers: []
	W0729 19:01:23.055252  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:01:23.055259  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:01:23.055313  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:01:23.091920  152077 cri.go:89] found id: ""
	I0729 19:01:23.091960  152077 logs.go:276] 0 containers: []
	W0729 19:01:23.091974  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:01:23.091982  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:01:23.092056  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:01:23.133918  152077 cri.go:89] found id: ""
	I0729 19:01:23.133940  152077 logs.go:276] 0 containers: []
	W0729 19:01:23.133947  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:01:23.133957  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:01:23.133972  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:01:23.197083  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:01:23.197122  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:01:23.216186  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:01:23.216215  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:01:23.322307  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:01:23.322340  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:01:23.322357  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:01:23.426178  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:01:23.426223  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:01:25.966108  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:01:25.980357  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:01:25.980438  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:01:26.023609  152077 cri.go:89] found id: ""
	I0729 19:01:26.023664  152077 logs.go:276] 0 containers: []
	W0729 19:01:26.023677  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:01:26.023690  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:01:26.023763  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:01:26.066290  152077 cri.go:89] found id: ""
	I0729 19:01:26.066325  152077 logs.go:276] 0 containers: []
	W0729 19:01:26.066336  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:01:26.066344  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:01:26.066412  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:01:26.103236  152077 cri.go:89] found id: ""
	I0729 19:01:26.103265  152077 logs.go:276] 0 containers: []
	W0729 19:01:26.103275  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:01:26.103284  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:01:26.103346  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:01:26.139403  152077 cri.go:89] found id: ""
	I0729 19:01:26.139436  152077 logs.go:276] 0 containers: []
	W0729 19:01:26.139448  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:01:26.139456  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:01:26.139525  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:01:26.180758  152077 cri.go:89] found id: ""
	I0729 19:01:26.180785  152077 logs.go:276] 0 containers: []
	W0729 19:01:26.180796  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:01:26.180803  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:01:26.180882  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:01:26.227237  152077 cri.go:89] found id: ""
	I0729 19:01:26.227276  152077 logs.go:276] 0 containers: []
	W0729 19:01:26.227289  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:01:26.227297  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:01:26.227361  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:01:26.262765  152077 cri.go:89] found id: ""
	I0729 19:01:26.262800  152077 logs.go:276] 0 containers: []
	W0729 19:01:26.262808  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:01:26.262816  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:01:26.262887  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:01:26.303988  152077 cri.go:89] found id: ""
	I0729 19:01:26.304024  152077 logs.go:276] 0 containers: []
	W0729 19:01:26.304035  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:01:26.304057  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:01:26.304072  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:01:26.389250  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:01:26.389293  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:01:26.439932  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:01:26.439962  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:01:26.498725  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:01:26.498764  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:01:26.513898  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:01:26.513938  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:01:26.591722  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:01:29.092841  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:01:29.108361  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:01:29.108460  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:01:29.159440  152077 cri.go:89] found id: ""
	I0729 19:01:29.159469  152077 logs.go:276] 0 containers: []
	W0729 19:01:29.159481  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:01:29.159489  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:01:29.159553  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:01:29.222836  152077 cri.go:89] found id: ""
	I0729 19:01:29.222867  152077 logs.go:276] 0 containers: []
	W0729 19:01:29.222881  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:01:29.222889  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:01:29.222956  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:01:29.261464  152077 cri.go:89] found id: ""
	I0729 19:01:29.261491  152077 logs.go:276] 0 containers: []
	W0729 19:01:29.261503  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:01:29.261511  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:01:29.261569  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:01:29.293679  152077 cri.go:89] found id: ""
	I0729 19:01:29.293704  152077 logs.go:276] 0 containers: []
	W0729 19:01:29.293712  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:01:29.293718  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:01:29.293787  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:01:29.327342  152077 cri.go:89] found id: ""
	I0729 19:01:29.327369  152077 logs.go:276] 0 containers: []
	W0729 19:01:29.327379  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:01:29.327388  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:01:29.327446  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:01:29.361714  152077 cri.go:89] found id: ""
	I0729 19:01:29.361743  152077 logs.go:276] 0 containers: []
	W0729 19:01:29.361754  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:01:29.361762  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:01:29.361823  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:01:29.394930  152077 cri.go:89] found id: ""
	I0729 19:01:29.394960  152077 logs.go:276] 0 containers: []
	W0729 19:01:29.394970  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:01:29.394976  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:01:29.395036  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:01:29.429024  152077 cri.go:89] found id: ""
	I0729 19:01:29.429048  152077 logs.go:276] 0 containers: []
	W0729 19:01:29.429056  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:01:29.429066  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:01:29.429078  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:01:29.509276  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:01:29.509312  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:01:29.556630  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:01:29.556658  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:01:29.607533  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:01:29.607567  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:01:29.622033  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:01:29.622060  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:01:29.697322  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:01:32.197746  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:01:32.210954  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:01:32.211016  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:01:32.248988  152077 cri.go:89] found id: ""
	I0729 19:01:32.249020  152077 logs.go:276] 0 containers: []
	W0729 19:01:32.249029  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:01:32.249037  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:01:32.249100  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:01:32.283687  152077 cri.go:89] found id: ""
	I0729 19:01:32.283713  152077 logs.go:276] 0 containers: []
	W0729 19:01:32.283721  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:01:32.283731  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:01:32.283790  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:01:32.321967  152077 cri.go:89] found id: ""
	I0729 19:01:32.321997  152077 logs.go:276] 0 containers: []
	W0729 19:01:32.322008  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:01:32.322016  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:01:32.322078  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:01:32.363725  152077 cri.go:89] found id: ""
	I0729 19:01:32.363746  152077 logs.go:276] 0 containers: []
	W0729 19:01:32.363753  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:01:32.363759  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:01:32.363811  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:01:32.399486  152077 cri.go:89] found id: ""
	I0729 19:01:32.399515  152077 logs.go:276] 0 containers: []
	W0729 19:01:32.399526  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:01:32.399534  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:01:32.399599  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:01:32.436443  152077 cri.go:89] found id: ""
	I0729 19:01:32.436473  152077 logs.go:276] 0 containers: []
	W0729 19:01:32.436482  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:01:32.436491  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:01:32.436559  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:01:32.470544  152077 cri.go:89] found id: ""
	I0729 19:01:32.470572  152077 logs.go:276] 0 containers: []
	W0729 19:01:32.470583  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:01:32.470592  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:01:32.470658  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:01:32.508571  152077 cri.go:89] found id: ""
	I0729 19:01:32.508605  152077 logs.go:276] 0 containers: []
	W0729 19:01:32.508616  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:01:32.508628  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:01:32.508649  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:01:32.563019  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:01:32.563065  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:01:32.577307  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:01:32.577340  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:01:32.652128  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:01:32.652149  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:01:32.652165  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:01:32.731878  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:01:32.731913  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:01:35.273240  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:01:35.286533  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:01:35.286628  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:01:35.328704  152077 cri.go:89] found id: ""
	I0729 19:01:35.328737  152077 logs.go:276] 0 containers: []
	W0729 19:01:35.328749  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:01:35.328758  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:01:35.328821  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:01:35.362782  152077 cri.go:89] found id: ""
	I0729 19:01:35.362812  152077 logs.go:276] 0 containers: []
	W0729 19:01:35.362823  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:01:35.362831  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:01:35.362896  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:01:35.398813  152077 cri.go:89] found id: ""
	I0729 19:01:35.398857  152077 logs.go:276] 0 containers: []
	W0729 19:01:35.398870  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:01:35.398878  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:01:35.398948  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:01:35.437643  152077 cri.go:89] found id: ""
	I0729 19:01:35.437674  152077 logs.go:276] 0 containers: []
	W0729 19:01:35.437687  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:01:35.437694  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:01:35.437757  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:01:35.472792  152077 cri.go:89] found id: ""
	I0729 19:01:35.472819  152077 logs.go:276] 0 containers: []
	W0729 19:01:35.472831  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:01:35.472839  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:01:35.472924  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:01:35.508313  152077 cri.go:89] found id: ""
	I0729 19:01:35.508349  152077 logs.go:276] 0 containers: []
	W0729 19:01:35.508361  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:01:35.508370  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:01:35.508438  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:01:35.548054  152077 cri.go:89] found id: ""
	I0729 19:01:35.548084  152077 logs.go:276] 0 containers: []
	W0729 19:01:35.548093  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:01:35.548099  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:01:35.548153  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:01:35.582288  152077 cri.go:89] found id: ""
	I0729 19:01:35.582320  152077 logs.go:276] 0 containers: []
	W0729 19:01:35.582331  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:01:35.582343  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:01:35.582361  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:01:35.595160  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:01:35.595189  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:01:35.665518  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:01:35.665543  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:01:35.665559  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:01:35.748240  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:01:35.748275  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:01:35.793539  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:01:35.793574  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:01:38.348498  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:01:38.361577  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:01:38.361646  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:01:38.396897  152077 cri.go:89] found id: ""
	I0729 19:01:38.396926  152077 logs.go:276] 0 containers: []
	W0729 19:01:38.396938  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:01:38.396946  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:01:38.397001  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:01:38.429494  152077 cri.go:89] found id: ""
	I0729 19:01:38.429526  152077 logs.go:276] 0 containers: []
	W0729 19:01:38.429537  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:01:38.429554  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:01:38.429604  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:01:38.462810  152077 cri.go:89] found id: ""
	I0729 19:01:38.462837  152077 logs.go:276] 0 containers: []
	W0729 19:01:38.462848  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:01:38.462856  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:01:38.462921  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:01:38.495364  152077 cri.go:89] found id: ""
	I0729 19:01:38.495394  152077 logs.go:276] 0 containers: []
	W0729 19:01:38.495403  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:01:38.495409  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:01:38.495457  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:01:38.527722  152077 cri.go:89] found id: ""
	I0729 19:01:38.527751  152077 logs.go:276] 0 containers: []
	W0729 19:01:38.527762  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:01:38.527771  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:01:38.527835  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:01:38.564320  152077 cri.go:89] found id: ""
	I0729 19:01:38.564350  152077 logs.go:276] 0 containers: []
	W0729 19:01:38.564363  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:01:38.564371  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:01:38.564438  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:01:38.599953  152077 cri.go:89] found id: ""
	I0729 19:01:38.599978  152077 logs.go:276] 0 containers: []
	W0729 19:01:38.599986  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:01:38.599992  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:01:38.600055  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:01:38.639773  152077 cri.go:89] found id: ""
	I0729 19:01:38.639810  152077 logs.go:276] 0 containers: []
	W0729 19:01:38.639822  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:01:38.639835  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:01:38.639850  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:01:38.652538  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:01:38.652571  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:01:38.729801  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:01:38.729826  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:01:38.729840  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:01:38.807759  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:01:38.807797  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:01:38.847967  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:01:38.847993  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:01:41.401968  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:01:41.416380  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:01:41.416448  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:01:41.455452  152077 cri.go:89] found id: ""
	I0729 19:01:41.455482  152077 logs.go:276] 0 containers: []
	W0729 19:01:41.455493  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:01:41.455501  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:01:41.455564  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:01:41.495845  152077 cri.go:89] found id: ""
	I0729 19:01:41.495871  152077 logs.go:276] 0 containers: []
	W0729 19:01:41.495880  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:01:41.495887  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:01:41.495939  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:01:41.529945  152077 cri.go:89] found id: ""
	I0729 19:01:41.529970  152077 logs.go:276] 0 containers: []
	W0729 19:01:41.529978  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:01:41.529984  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:01:41.530036  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:01:41.564473  152077 cri.go:89] found id: ""
	I0729 19:01:41.564513  152077 logs.go:276] 0 containers: []
	W0729 19:01:41.564525  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:01:41.564534  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:01:41.564596  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:01:41.600348  152077 cri.go:89] found id: ""
	I0729 19:01:41.600383  152077 logs.go:276] 0 containers: []
	W0729 19:01:41.600396  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:01:41.600404  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:01:41.600467  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:01:41.638451  152077 cri.go:89] found id: ""
	I0729 19:01:41.638484  152077 logs.go:276] 0 containers: []
	W0729 19:01:41.638496  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:01:41.638504  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:01:41.638568  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:01:41.676900  152077 cri.go:89] found id: ""
	I0729 19:01:41.676939  152077 logs.go:276] 0 containers: []
	W0729 19:01:41.676950  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:01:41.676958  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:01:41.677030  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:01:41.711491  152077 cri.go:89] found id: ""
	I0729 19:01:41.711520  152077 logs.go:276] 0 containers: []
	W0729 19:01:41.711531  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:01:41.711550  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:01:41.711567  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:01:41.767841  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:01:41.767876  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:01:41.783386  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:01:41.783417  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:01:41.865491  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:01:41.865516  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:01:41.865532  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:01:41.950294  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:01:41.950326  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:01:44.492380  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:01:44.509969  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:01:44.510051  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:01:44.547907  152077 cri.go:89] found id: ""
	I0729 19:01:44.547940  152077 logs.go:276] 0 containers: []
	W0729 19:01:44.547953  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:01:44.547961  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:01:44.548030  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:01:44.588941  152077 cri.go:89] found id: ""
	I0729 19:01:44.588977  152077 logs.go:276] 0 containers: []
	W0729 19:01:44.588989  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:01:44.588999  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:01:44.589074  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:01:44.627188  152077 cri.go:89] found id: ""
	I0729 19:01:44.627222  152077 logs.go:276] 0 containers: []
	W0729 19:01:44.627234  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:01:44.627243  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:01:44.627311  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:01:44.670012  152077 cri.go:89] found id: ""
	I0729 19:01:44.670037  152077 logs.go:276] 0 containers: []
	W0729 19:01:44.670062  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:01:44.670071  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:01:44.670135  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:01:44.712832  152077 cri.go:89] found id: ""
	I0729 19:01:44.712878  152077 logs.go:276] 0 containers: []
	W0729 19:01:44.712891  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:01:44.712899  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:01:44.712963  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:01:44.759828  152077 cri.go:89] found id: ""
	I0729 19:01:44.759858  152077 logs.go:276] 0 containers: []
	W0729 19:01:44.759876  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:01:44.759885  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:01:44.759955  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:01:44.804899  152077 cri.go:89] found id: ""
	I0729 19:01:44.804934  152077 logs.go:276] 0 containers: []
	W0729 19:01:44.804947  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:01:44.804957  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:01:44.805036  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:01:44.847663  152077 cri.go:89] found id: ""
	I0729 19:01:44.847689  152077 logs.go:276] 0 containers: []
	W0729 19:01:44.847699  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:01:44.847711  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:01:44.847726  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:01:44.926566  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:01:44.926605  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:01:44.944083  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:01:44.944125  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:01:45.035755  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:01:45.035783  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:01:45.035805  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:01:45.130043  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:01:45.130085  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:01:47.676774  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:01:47.692032  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:01:47.692113  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:01:47.742426  152077 cri.go:89] found id: ""
	I0729 19:01:47.742451  152077 logs.go:276] 0 containers: []
	W0729 19:01:47.742460  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:01:47.742468  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:01:47.742530  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:01:47.785924  152077 cri.go:89] found id: ""
	I0729 19:01:47.785962  152077 logs.go:276] 0 containers: []
	W0729 19:01:47.785975  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:01:47.785982  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:01:47.786040  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:01:47.826303  152077 cri.go:89] found id: ""
	I0729 19:01:47.826339  152077 logs.go:276] 0 containers: []
	W0729 19:01:47.826352  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:01:47.826360  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:01:47.826435  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:01:47.871116  152077 cri.go:89] found id: ""
	I0729 19:01:47.871160  152077 logs.go:276] 0 containers: []
	W0729 19:01:47.871173  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:01:47.871182  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:01:47.871262  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:01:47.912123  152077 cri.go:89] found id: ""
	I0729 19:01:47.912147  152077 logs.go:276] 0 containers: []
	W0729 19:01:47.912156  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:01:47.912161  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:01:47.912216  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:01:47.948354  152077 cri.go:89] found id: ""
	I0729 19:01:47.948383  152077 logs.go:276] 0 containers: []
	W0729 19:01:47.948391  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:01:47.948398  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:01:47.948458  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:01:47.992017  152077 cri.go:89] found id: ""
	I0729 19:01:47.992046  152077 logs.go:276] 0 containers: []
	W0729 19:01:47.992056  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:01:47.992063  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:01:47.992123  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:01:48.034675  152077 cri.go:89] found id: ""
	I0729 19:01:48.034700  152077 logs.go:276] 0 containers: []
	W0729 19:01:48.034709  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:01:48.034719  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:01:48.034730  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:01:48.130420  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:01:48.130459  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:01:48.175668  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:01:48.175699  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:01:48.228336  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:01:48.228376  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:01:48.242717  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:01:48.242753  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:01:48.315002  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:01:50.815570  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:01:50.829042  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:01:50.829103  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:01:50.865294  152077 cri.go:89] found id: ""
	I0729 19:01:50.865320  152077 logs.go:276] 0 containers: []
	W0729 19:01:50.865329  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:01:50.865335  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:01:50.865388  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:01:50.903163  152077 cri.go:89] found id: ""
	I0729 19:01:50.903189  152077 logs.go:276] 0 containers: []
	W0729 19:01:50.903198  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:01:50.903204  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:01:50.903285  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:01:50.946489  152077 cri.go:89] found id: ""
	I0729 19:01:50.946521  152077 logs.go:276] 0 containers: []
	W0729 19:01:50.946531  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:01:50.946539  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:01:50.946604  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:01:50.982771  152077 cri.go:89] found id: ""
	I0729 19:01:50.982810  152077 logs.go:276] 0 containers: []
	W0729 19:01:50.982821  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:01:50.982829  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:01:50.982899  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:01:51.018170  152077 cri.go:89] found id: ""
	I0729 19:01:51.018213  152077 logs.go:276] 0 containers: []
	W0729 19:01:51.018226  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:01:51.018234  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:01:51.018297  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:01:51.052175  152077 cri.go:89] found id: ""
	I0729 19:01:51.052213  152077 logs.go:276] 0 containers: []
	W0729 19:01:51.052225  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:01:51.052234  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:01:51.052292  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:01:51.088927  152077 cri.go:89] found id: ""
	I0729 19:01:51.088954  152077 logs.go:276] 0 containers: []
	W0729 19:01:51.088962  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:01:51.088968  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:01:51.089028  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:01:51.124826  152077 cri.go:89] found id: ""
	I0729 19:01:51.124865  152077 logs.go:276] 0 containers: []
	W0729 19:01:51.124877  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:01:51.124890  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:01:51.124907  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:01:51.139686  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:01:51.139720  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:01:51.211739  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:01:51.211763  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:01:51.211782  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:01:51.318763  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:01:51.318802  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:01:51.368929  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:01:51.368959  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:01:53.921446  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:01:53.936712  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:01:53.936794  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:01:53.975453  152077 cri.go:89] found id: ""
	I0729 19:01:53.975480  152077 logs.go:276] 0 containers: []
	W0729 19:01:53.975491  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:01:53.975498  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:01:53.975563  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:01:54.013426  152077 cri.go:89] found id: ""
	I0729 19:01:54.013456  152077 logs.go:276] 0 containers: []
	W0729 19:01:54.013467  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:01:54.013476  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:01:54.013539  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:01:54.052129  152077 cri.go:89] found id: ""
	I0729 19:01:54.052162  152077 logs.go:276] 0 containers: []
	W0729 19:01:54.052173  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:01:54.052181  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:01:54.052246  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:01:54.088096  152077 cri.go:89] found id: ""
	I0729 19:01:54.088133  152077 logs.go:276] 0 containers: []
	W0729 19:01:54.088145  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:01:54.088154  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:01:54.088226  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:01:54.125560  152077 cri.go:89] found id: ""
	I0729 19:01:54.125596  152077 logs.go:276] 0 containers: []
	W0729 19:01:54.125607  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:01:54.125617  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:01:54.125681  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:01:54.163025  152077 cri.go:89] found id: ""
	I0729 19:01:54.163053  152077 logs.go:276] 0 containers: []
	W0729 19:01:54.163066  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:01:54.163075  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:01:54.163140  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:01:54.208521  152077 cri.go:89] found id: ""
	I0729 19:01:54.208548  152077 logs.go:276] 0 containers: []
	W0729 19:01:54.208557  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:01:54.208567  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:01:54.208633  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:01:54.253574  152077 cri.go:89] found id: ""
	I0729 19:01:54.253605  152077 logs.go:276] 0 containers: []
	W0729 19:01:54.253616  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:01:54.253630  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:01:54.253646  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:01:54.318566  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:01:54.318602  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:01:54.336337  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:01:54.336372  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:01:54.406684  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:01:54.406712  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:01:54.406729  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:01:54.513347  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:01:54.513395  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:01:57.054492  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:01:57.070386  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:01:57.070452  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:01:57.118865  152077 cri.go:89] found id: ""
	I0729 19:01:57.118895  152077 logs.go:276] 0 containers: []
	W0729 19:01:57.118905  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:01:57.118913  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:01:57.118978  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:01:57.169718  152077 cri.go:89] found id: ""
	I0729 19:01:57.169747  152077 logs.go:276] 0 containers: []
	W0729 19:01:57.169758  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:01:57.169766  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:01:57.169826  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:01:57.219530  152077 cri.go:89] found id: ""
	I0729 19:01:57.219566  152077 logs.go:276] 0 containers: []
	W0729 19:01:57.219575  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:01:57.219582  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:01:57.219648  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:01:57.258028  152077 cri.go:89] found id: ""
	I0729 19:01:57.258055  152077 logs.go:276] 0 containers: []
	W0729 19:01:57.258066  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:01:57.258074  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:01:57.258142  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:01:57.295741  152077 cri.go:89] found id: ""
	I0729 19:01:57.295772  152077 logs.go:276] 0 containers: []
	W0729 19:01:57.295784  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:01:57.295791  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:01:57.295856  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:01:57.337215  152077 cri.go:89] found id: ""
	I0729 19:01:57.337246  152077 logs.go:276] 0 containers: []
	W0729 19:01:57.337258  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:01:57.337265  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:01:57.337313  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:01:57.375617  152077 cri.go:89] found id: ""
	I0729 19:01:57.375643  152077 logs.go:276] 0 containers: []
	W0729 19:01:57.375654  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:01:57.375661  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:01:57.375724  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:01:57.413511  152077 cri.go:89] found id: ""
	I0729 19:01:57.413540  152077 logs.go:276] 0 containers: []
	W0729 19:01:57.413550  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:01:57.413568  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:01:57.413590  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:01:57.468261  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:01:57.468297  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:01:57.484419  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:01:57.484462  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:01:57.561049  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:01:57.561075  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:01:57.561090  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:01:57.645555  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:01:57.645596  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:02:00.188479  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:02:00.206752  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:02:00.206832  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:02:00.253886  152077 cri.go:89] found id: ""
	I0729 19:02:00.253928  152077 logs.go:276] 0 containers: []
	W0729 19:02:00.253940  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:02:00.253948  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:02:00.254015  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:02:00.292361  152077 cri.go:89] found id: ""
	I0729 19:02:00.292400  152077 logs.go:276] 0 containers: []
	W0729 19:02:00.292408  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:02:00.292414  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:02:00.292473  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:02:00.331566  152077 cri.go:89] found id: ""
	I0729 19:02:00.331599  152077 logs.go:276] 0 containers: []
	W0729 19:02:00.331610  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:02:00.331618  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:02:00.331691  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:02:00.381104  152077 cri.go:89] found id: ""
	I0729 19:02:00.381130  152077 logs.go:276] 0 containers: []
	W0729 19:02:00.381141  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:02:00.381149  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:02:00.381209  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:02:00.420064  152077 cri.go:89] found id: ""
	I0729 19:02:00.420096  152077 logs.go:276] 0 containers: []
	W0729 19:02:00.420106  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:02:00.420114  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:02:00.420182  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:02:00.467635  152077 cri.go:89] found id: ""
	I0729 19:02:00.467677  152077 logs.go:276] 0 containers: []
	W0729 19:02:00.467687  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:02:00.467696  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:02:00.467761  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:02:00.510725  152077 cri.go:89] found id: ""
	I0729 19:02:00.510754  152077 logs.go:276] 0 containers: []
	W0729 19:02:00.510767  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:02:00.510776  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:02:00.510841  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:02:00.549390  152077 cri.go:89] found id: ""
	I0729 19:02:00.549422  152077 logs.go:276] 0 containers: []
	W0729 19:02:00.549434  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:02:00.549446  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:02:00.549465  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:02:00.611731  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:02:00.611771  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:02:00.627739  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:02:00.627772  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:02:00.704180  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:02:00.704203  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:02:00.704218  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:02:00.783843  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:02:00.783881  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:02:03.333262  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:02:03.349139  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:02:03.349211  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:02:03.385022  152077 cri.go:89] found id: ""
	I0729 19:02:03.385056  152077 logs.go:276] 0 containers: []
	W0729 19:02:03.385067  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:02:03.385076  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:02:03.385144  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:02:03.421569  152077 cri.go:89] found id: ""
	I0729 19:02:03.421601  152077 logs.go:276] 0 containers: []
	W0729 19:02:03.421613  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:02:03.421620  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:02:03.421669  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:02:03.466236  152077 cri.go:89] found id: ""
	I0729 19:02:03.466267  152077 logs.go:276] 0 containers: []
	W0729 19:02:03.466278  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:02:03.466287  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:02:03.466351  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:02:03.504121  152077 cri.go:89] found id: ""
	I0729 19:02:03.504155  152077 logs.go:276] 0 containers: []
	W0729 19:02:03.504166  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:02:03.504174  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:02:03.504249  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:02:03.555576  152077 cri.go:89] found id: ""
	I0729 19:02:03.555608  152077 logs.go:276] 0 containers: []
	W0729 19:02:03.555620  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:02:03.555629  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:02:03.555703  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:02:03.592008  152077 cri.go:89] found id: ""
	I0729 19:02:03.592044  152077 logs.go:276] 0 containers: []
	W0729 19:02:03.592056  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:02:03.592064  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:02:03.592132  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:02:03.629384  152077 cri.go:89] found id: ""
	I0729 19:02:03.629412  152077 logs.go:276] 0 containers: []
	W0729 19:02:03.629422  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:02:03.629431  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:02:03.629502  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:02:03.662970  152077 cri.go:89] found id: ""
	I0729 19:02:03.663003  152077 logs.go:276] 0 containers: []
	W0729 19:02:03.663013  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:02:03.663024  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:02:03.663036  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:02:03.676654  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:02:03.676681  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:02:03.748498  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:02:03.748518  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:02:03.748535  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:02:03.831622  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:02:03.831658  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:02:03.872064  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:02:03.872100  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:02:06.425844  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:02:06.441407  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:02:06.441479  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:02:06.482518  152077 cri.go:89] found id: ""
	I0729 19:02:06.482540  152077 logs.go:276] 0 containers: []
	W0729 19:02:06.482549  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:02:06.482555  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:02:06.482602  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:02:06.518750  152077 cri.go:89] found id: ""
	I0729 19:02:06.518780  152077 logs.go:276] 0 containers: []
	W0729 19:02:06.518802  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:02:06.518811  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:02:06.518883  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:02:06.552449  152077 cri.go:89] found id: ""
	I0729 19:02:06.552475  152077 logs.go:276] 0 containers: []
	W0729 19:02:06.552485  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:02:06.552490  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:02:06.552541  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:02:06.587233  152077 cri.go:89] found id: ""
	I0729 19:02:06.587265  152077 logs.go:276] 0 containers: []
	W0729 19:02:06.587278  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:02:06.587289  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:02:06.587358  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:02:06.624693  152077 cri.go:89] found id: ""
	I0729 19:02:06.624726  152077 logs.go:276] 0 containers: []
	W0729 19:02:06.624738  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:02:06.624747  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:02:06.624814  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:02:06.661420  152077 cri.go:89] found id: ""
	I0729 19:02:06.661449  152077 logs.go:276] 0 containers: []
	W0729 19:02:06.661457  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:02:06.661464  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:02:06.661515  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:02:06.700165  152077 cri.go:89] found id: ""
	I0729 19:02:06.700197  152077 logs.go:276] 0 containers: []
	W0729 19:02:06.700209  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:02:06.700217  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:02:06.700278  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:02:06.740366  152077 cri.go:89] found id: ""
	I0729 19:02:06.740402  152077 logs.go:276] 0 containers: []
	W0729 19:02:06.740414  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:02:06.740427  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:02:06.740443  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:02:06.797278  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:02:06.797317  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:02:06.812402  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:02:06.812444  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:02:06.901415  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:02:06.901443  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:02:06.901458  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:02:06.990799  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:02:06.990838  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:02:09.544931  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:02:09.559246  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:02:09.559307  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:02:09.599235  152077 cri.go:89] found id: ""
	I0729 19:02:09.599260  152077 logs.go:276] 0 containers: []
	W0729 19:02:09.599268  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:02:09.599274  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:02:09.599340  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:02:09.638939  152077 cri.go:89] found id: ""
	I0729 19:02:09.638958  152077 logs.go:276] 0 containers: []
	W0729 19:02:09.638967  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:02:09.638976  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:02:09.639030  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:02:09.680558  152077 cri.go:89] found id: ""
	I0729 19:02:09.680582  152077 logs.go:276] 0 containers: []
	W0729 19:02:09.680592  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:02:09.680600  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:02:09.680653  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:02:09.733902  152077 cri.go:89] found id: ""
	I0729 19:02:09.733924  152077 logs.go:276] 0 containers: []
	W0729 19:02:09.733930  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:02:09.733936  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:02:09.733979  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:02:09.782516  152077 cri.go:89] found id: ""
	I0729 19:02:09.782539  152077 logs.go:276] 0 containers: []
	W0729 19:02:09.782549  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:02:09.782557  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:02:09.782616  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:02:09.831060  152077 cri.go:89] found id: ""
	I0729 19:02:09.831088  152077 logs.go:276] 0 containers: []
	W0729 19:02:09.831100  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:02:09.831107  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:02:09.831168  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:02:09.876003  152077 cri.go:89] found id: ""
	I0729 19:02:09.876032  152077 logs.go:276] 0 containers: []
	W0729 19:02:09.876043  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:02:09.876051  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:02:09.876115  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:02:09.912211  152077 cri.go:89] found id: ""
	I0729 19:02:09.912235  152077 logs.go:276] 0 containers: []
	W0729 19:02:09.912243  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:02:09.912255  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:02:09.912275  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:02:09.931524  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:02:09.931558  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:02:10.013746  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:02:10.013776  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:02:10.013790  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:02:10.096897  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:02:10.096945  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:02:10.143264  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:02:10.143296  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:02:12.697769  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:02:12.715501  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:02:12.715589  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:02:12.756419  152077 cri.go:89] found id: ""
	I0729 19:02:12.756445  152077 logs.go:276] 0 containers: []
	W0729 19:02:12.756454  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:02:12.756460  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:02:12.756510  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:02:12.790374  152077 cri.go:89] found id: ""
	I0729 19:02:12.790415  152077 logs.go:276] 0 containers: []
	W0729 19:02:12.790428  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:02:12.790436  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:02:12.790504  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:02:12.824703  152077 cri.go:89] found id: ""
	I0729 19:02:12.824732  152077 logs.go:276] 0 containers: []
	W0729 19:02:12.824743  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:02:12.824751  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:02:12.824831  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:02:12.863784  152077 cri.go:89] found id: ""
	I0729 19:02:12.863813  152077 logs.go:276] 0 containers: []
	W0729 19:02:12.863824  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:02:12.863833  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:02:12.863892  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:02:12.904733  152077 cri.go:89] found id: ""
	I0729 19:02:12.904772  152077 logs.go:276] 0 containers: []
	W0729 19:02:12.904785  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:02:12.904794  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:02:12.904872  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:02:12.943015  152077 cri.go:89] found id: ""
	I0729 19:02:12.943047  152077 logs.go:276] 0 containers: []
	W0729 19:02:12.943058  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:02:12.943065  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:02:12.943125  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:02:12.983189  152077 cri.go:89] found id: ""
	I0729 19:02:12.983219  152077 logs.go:276] 0 containers: []
	W0729 19:02:12.983230  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:02:12.983238  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:02:12.983301  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:02:13.017779  152077 cri.go:89] found id: ""
	I0729 19:02:13.017815  152077 logs.go:276] 0 containers: []
	W0729 19:02:13.017826  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:02:13.017840  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:02:13.017860  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:02:13.095111  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:02:13.095135  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:02:13.095150  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:02:13.176170  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:02:13.176203  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:02:13.216619  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:02:13.216651  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:02:13.266799  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:02:13.266833  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:02:15.782094  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:02:15.807017  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:02:15.807093  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:02:15.853163  152077 cri.go:89] found id: ""
	I0729 19:02:15.853191  152077 logs.go:276] 0 containers: []
	W0729 19:02:15.853199  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:02:15.853206  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:02:15.853267  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:02:15.887380  152077 cri.go:89] found id: ""
	I0729 19:02:15.887414  152077 logs.go:276] 0 containers: []
	W0729 19:02:15.887426  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:02:15.887434  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:02:15.887501  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:02:15.921776  152077 cri.go:89] found id: ""
	I0729 19:02:15.921824  152077 logs.go:276] 0 containers: []
	W0729 19:02:15.921844  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:02:15.921853  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:02:15.921920  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:02:15.957707  152077 cri.go:89] found id: ""
	I0729 19:02:15.957735  152077 logs.go:276] 0 containers: []
	W0729 19:02:15.957744  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:02:15.957750  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:02:15.957804  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:02:15.995185  152077 cri.go:89] found id: ""
	I0729 19:02:15.995210  152077 logs.go:276] 0 containers: []
	W0729 19:02:15.995219  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:02:15.995225  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:02:15.995278  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:02:16.033357  152077 cri.go:89] found id: ""
	I0729 19:02:16.033405  152077 logs.go:276] 0 containers: []
	W0729 19:02:16.033424  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:02:16.033434  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:02:16.033499  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:02:16.066666  152077 cri.go:89] found id: ""
	I0729 19:02:16.066700  152077 logs.go:276] 0 containers: []
	W0729 19:02:16.066713  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:02:16.066721  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:02:16.066782  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:02:16.100013  152077 cri.go:89] found id: ""
	I0729 19:02:16.100040  152077 logs.go:276] 0 containers: []
	W0729 19:02:16.100050  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:02:16.100061  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:02:16.100074  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:02:16.153959  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:02:16.153995  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:02:16.167999  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:02:16.168027  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:02:16.239201  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:02:16.239225  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:02:16.239241  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:02:16.320110  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:02:16.320145  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:02:18.860171  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:02:18.878029  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:02:18.878107  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:02:18.914829  152077 cri.go:89] found id: ""
	I0729 19:02:18.914868  152077 logs.go:276] 0 containers: []
	W0729 19:02:18.914880  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:02:18.914887  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:02:18.914959  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:02:18.956511  152077 cri.go:89] found id: ""
	I0729 19:02:18.956540  152077 logs.go:276] 0 containers: []
	W0729 19:02:18.956550  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:02:18.956558  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:02:18.956625  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:02:18.996664  152077 cri.go:89] found id: ""
	I0729 19:02:18.996694  152077 logs.go:276] 0 containers: []
	W0729 19:02:18.996706  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:02:18.996713  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:02:18.996778  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:02:19.034474  152077 cri.go:89] found id: ""
	I0729 19:02:19.034505  152077 logs.go:276] 0 containers: []
	W0729 19:02:19.034517  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:02:19.034532  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:02:19.034607  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:02:19.074540  152077 cri.go:89] found id: ""
	I0729 19:02:19.074581  152077 logs.go:276] 0 containers: []
	W0729 19:02:19.074593  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:02:19.074602  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:02:19.074667  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:02:19.111974  152077 cri.go:89] found id: ""
	I0729 19:02:19.112000  152077 logs.go:276] 0 containers: []
	W0729 19:02:19.112010  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:02:19.112019  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:02:19.112088  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:02:19.149000  152077 cri.go:89] found id: ""
	I0729 19:02:19.149035  152077 logs.go:276] 0 containers: []
	W0729 19:02:19.149047  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:02:19.149057  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:02:19.149129  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:02:19.182833  152077 cri.go:89] found id: ""
	I0729 19:02:19.182865  152077 logs.go:276] 0 containers: []
	W0729 19:02:19.182877  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:02:19.182888  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:02:19.182904  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:02:19.250166  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:02:19.250210  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:02:19.264702  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:02:19.264738  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:02:19.337531  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:02:19.337564  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:02:19.337582  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:02:19.435752  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:02:19.435788  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:02:21.988398  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:02:22.002085  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:02:22.002146  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:02:22.038846  152077 cri.go:89] found id: ""
	I0729 19:02:22.038876  152077 logs.go:276] 0 containers: []
	W0729 19:02:22.038884  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:02:22.038891  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:02:22.038941  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:02:22.075997  152077 cri.go:89] found id: ""
	I0729 19:02:22.076023  152077 logs.go:276] 0 containers: []
	W0729 19:02:22.076031  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:02:22.076037  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:02:22.076087  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:02:22.117518  152077 cri.go:89] found id: ""
	I0729 19:02:22.117555  152077 logs.go:276] 0 containers: []
	W0729 19:02:22.117565  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:02:22.117572  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:02:22.117640  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:02:22.153936  152077 cri.go:89] found id: ""
	I0729 19:02:22.153962  152077 logs.go:276] 0 containers: []
	W0729 19:02:22.153971  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:02:22.153977  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:02:22.154026  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:02:22.190283  152077 cri.go:89] found id: ""
	I0729 19:02:22.190312  152077 logs.go:276] 0 containers: []
	W0729 19:02:22.190321  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:02:22.190328  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:02:22.190381  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:02:22.227262  152077 cri.go:89] found id: ""
	I0729 19:02:22.227291  152077 logs.go:276] 0 containers: []
	W0729 19:02:22.227301  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:02:22.227311  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:02:22.227438  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:02:22.265217  152077 cri.go:89] found id: ""
	I0729 19:02:22.265244  152077 logs.go:276] 0 containers: []
	W0729 19:02:22.265255  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:02:22.265263  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:02:22.265326  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:02:22.306440  152077 cri.go:89] found id: ""
	I0729 19:02:22.306466  152077 logs.go:276] 0 containers: []
	W0729 19:02:22.306473  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:02:22.306483  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:02:22.306503  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:02:22.354982  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:02:22.355014  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:02:22.368321  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:02:22.368346  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:02:22.436055  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:02:22.436073  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:02:22.436086  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:02:22.516055  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:02:22.516089  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:02:25.058056  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:02:25.071430  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:02:25.071487  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:02:25.107044  152077 cri.go:89] found id: ""
	I0729 19:02:25.107074  152077 logs.go:276] 0 containers: []
	W0729 19:02:25.107085  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:02:25.107093  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:02:25.107164  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:02:25.141264  152077 cri.go:89] found id: ""
	I0729 19:02:25.141292  152077 logs.go:276] 0 containers: []
	W0729 19:02:25.141300  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:02:25.141306  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:02:25.141360  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:02:25.174975  152077 cri.go:89] found id: ""
	I0729 19:02:25.175005  152077 logs.go:276] 0 containers: []
	W0729 19:02:25.175013  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:02:25.175020  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:02:25.175071  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:02:25.208765  152077 cri.go:89] found id: ""
	I0729 19:02:25.208790  152077 logs.go:276] 0 containers: []
	W0729 19:02:25.208799  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:02:25.208804  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:02:25.208877  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:02:25.242383  152077 cri.go:89] found id: ""
	I0729 19:02:25.242417  152077 logs.go:276] 0 containers: []
	W0729 19:02:25.242428  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:02:25.242436  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:02:25.242498  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:02:25.276246  152077 cri.go:89] found id: ""
	I0729 19:02:25.276278  152077 logs.go:276] 0 containers: []
	W0729 19:02:25.276288  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:02:25.276296  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:02:25.276355  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:02:25.309795  152077 cri.go:89] found id: ""
	I0729 19:02:25.309820  152077 logs.go:276] 0 containers: []
	W0729 19:02:25.309830  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:02:25.309847  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:02:25.309910  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:02:25.349561  152077 cri.go:89] found id: ""
	I0729 19:02:25.349585  152077 logs.go:276] 0 containers: []
	W0729 19:02:25.349594  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:02:25.349602  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:02:25.349616  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:02:25.398380  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:02:25.398412  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:02:25.411995  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:02:25.412026  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:02:25.489787  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:02:25.489811  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:02:25.489823  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:02:25.572796  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:02:25.572833  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:02:28.117146  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:02:28.131371  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:02:28.131444  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:02:28.168407  152077 cri.go:89] found id: ""
	I0729 19:02:28.168431  152077 logs.go:276] 0 containers: []
	W0729 19:02:28.168444  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:02:28.168451  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:02:28.168505  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:02:28.204527  152077 cri.go:89] found id: ""
	I0729 19:02:28.204557  152077 logs.go:276] 0 containers: []
	W0729 19:02:28.204565  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:02:28.204571  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:02:28.204622  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:02:28.239140  152077 cri.go:89] found id: ""
	I0729 19:02:28.239158  152077 logs.go:276] 0 containers: []
	W0729 19:02:28.239165  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:02:28.239175  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:02:28.239221  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:02:28.272745  152077 cri.go:89] found id: ""
	I0729 19:02:28.272776  152077 logs.go:276] 0 containers: []
	W0729 19:02:28.272785  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:02:28.272791  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:02:28.272850  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:02:28.307896  152077 cri.go:89] found id: ""
	I0729 19:02:28.307926  152077 logs.go:276] 0 containers: []
	W0729 19:02:28.307938  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:02:28.307947  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:02:28.308007  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:02:28.346283  152077 cri.go:89] found id: ""
	I0729 19:02:28.346310  152077 logs.go:276] 0 containers: []
	W0729 19:02:28.346321  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:02:28.346329  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:02:28.346388  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:02:28.390028  152077 cri.go:89] found id: ""
	I0729 19:02:28.390060  152077 logs.go:276] 0 containers: []
	W0729 19:02:28.390071  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:02:28.390083  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:02:28.390146  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:02:28.428824  152077 cri.go:89] found id: ""
	I0729 19:02:28.428873  152077 logs.go:276] 0 containers: []
	W0729 19:02:28.428887  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:02:28.428899  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:02:28.428917  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:02:28.492784  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:02:28.492831  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:02:28.511572  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:02:28.511605  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:02:28.589108  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:02:28.589130  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:02:28.589146  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:02:28.672512  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:02:28.672543  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:02:31.215019  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:02:31.227614  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:02:31.227695  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:02:31.263857  152077 cri.go:89] found id: ""
	I0729 19:02:31.263890  152077 logs.go:276] 0 containers: []
	W0729 19:02:31.263900  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:02:31.263909  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:02:31.263979  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:02:31.299916  152077 cri.go:89] found id: ""
	I0729 19:02:31.299979  152077 logs.go:276] 0 containers: []
	W0729 19:02:31.299992  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:02:31.300001  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:02:31.300071  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:02:31.335548  152077 cri.go:89] found id: ""
	I0729 19:02:31.335576  152077 logs.go:276] 0 containers: []
	W0729 19:02:31.335587  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:02:31.335595  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:02:31.335659  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:02:31.371811  152077 cri.go:89] found id: ""
	I0729 19:02:31.371840  152077 logs.go:276] 0 containers: []
	W0729 19:02:31.371852  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:02:31.371860  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:02:31.371926  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:02:31.405264  152077 cri.go:89] found id: ""
	I0729 19:02:31.405297  152077 logs.go:276] 0 containers: []
	W0729 19:02:31.405308  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:02:31.405317  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:02:31.405382  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:02:31.449336  152077 cri.go:89] found id: ""
	I0729 19:02:31.449362  152077 logs.go:276] 0 containers: []
	W0729 19:02:31.449373  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:02:31.449380  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:02:31.449441  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:02:31.485822  152077 cri.go:89] found id: ""
	I0729 19:02:31.485852  152077 logs.go:276] 0 containers: []
	W0729 19:02:31.485860  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:02:31.485867  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:02:31.485918  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:02:31.521549  152077 cri.go:89] found id: ""
	I0729 19:02:31.521586  152077 logs.go:276] 0 containers: []
	W0729 19:02:31.521597  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:02:31.521611  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:02:31.521636  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:02:31.536566  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:02:31.536590  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:02:31.616478  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:02:31.616500  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:02:31.616523  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:02:31.700795  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:02:31.700845  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:02:31.743660  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:02:31.743696  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:02:34.307213  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:02:34.324002  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:02:34.324063  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:02:34.362568  152077 cri.go:89] found id: ""
	I0729 19:02:34.362605  152077 logs.go:276] 0 containers: []
	W0729 19:02:34.362616  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:02:34.362625  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:02:34.362689  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:02:34.397147  152077 cri.go:89] found id: ""
	I0729 19:02:34.397179  152077 logs.go:276] 0 containers: []
	W0729 19:02:34.397190  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:02:34.397199  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:02:34.397266  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:02:34.433209  152077 cri.go:89] found id: ""
	I0729 19:02:34.433240  152077 logs.go:276] 0 containers: []
	W0729 19:02:34.433250  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:02:34.433258  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:02:34.433322  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:02:34.470597  152077 cri.go:89] found id: ""
	I0729 19:02:34.470623  152077 logs.go:276] 0 containers: []
	W0729 19:02:34.470645  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:02:34.470662  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:02:34.470725  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:02:34.504189  152077 cri.go:89] found id: ""
	I0729 19:02:34.504218  152077 logs.go:276] 0 containers: []
	W0729 19:02:34.504228  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:02:34.504236  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:02:34.504286  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:02:34.538399  152077 cri.go:89] found id: ""
	I0729 19:02:34.538430  152077 logs.go:276] 0 containers: []
	W0729 19:02:34.538442  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:02:34.538449  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:02:34.538515  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:02:34.576954  152077 cri.go:89] found id: ""
	I0729 19:02:34.576989  152077 logs.go:276] 0 containers: []
	W0729 19:02:34.577001  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:02:34.577010  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:02:34.577078  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:02:34.618922  152077 cri.go:89] found id: ""
	I0729 19:02:34.618953  152077 logs.go:276] 0 containers: []
	W0729 19:02:34.618962  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:02:34.618973  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:02:34.618986  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:02:34.670796  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:02:34.670828  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:02:34.684666  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:02:34.684697  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:02:34.753346  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:02:34.753373  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:02:34.753387  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:02:34.849536  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:02:34.849571  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:02:37.389241  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:02:37.404625  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:02:37.404705  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:02:37.444063  152077 cri.go:89] found id: ""
	I0729 19:02:37.444089  152077 logs.go:276] 0 containers: []
	W0729 19:02:37.444100  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:02:37.444108  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:02:37.444172  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:02:37.484231  152077 cri.go:89] found id: ""
	I0729 19:02:37.484259  152077 logs.go:276] 0 containers: []
	W0729 19:02:37.484271  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:02:37.484278  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:02:37.484340  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:02:37.521843  152077 cri.go:89] found id: ""
	I0729 19:02:37.521875  152077 logs.go:276] 0 containers: []
	W0729 19:02:37.521887  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:02:37.521895  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:02:37.521965  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:02:37.558918  152077 cri.go:89] found id: ""
	I0729 19:02:37.558950  152077 logs.go:276] 0 containers: []
	W0729 19:02:37.558963  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:02:37.558971  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:02:37.559042  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:02:37.595629  152077 cri.go:89] found id: ""
	I0729 19:02:37.595656  152077 logs.go:276] 0 containers: []
	W0729 19:02:37.595674  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:02:37.595682  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:02:37.595746  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:02:37.634155  152077 cri.go:89] found id: ""
	I0729 19:02:37.634183  152077 logs.go:276] 0 containers: []
	W0729 19:02:37.634193  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:02:37.634201  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:02:37.634266  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:02:37.670161  152077 cri.go:89] found id: ""
	I0729 19:02:37.670193  152077 logs.go:276] 0 containers: []
	W0729 19:02:37.670205  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:02:37.670214  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:02:37.670280  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:02:37.714735  152077 cri.go:89] found id: ""
	I0729 19:02:37.714764  152077 logs.go:276] 0 containers: []
	W0729 19:02:37.714774  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:02:37.714786  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:02:37.714802  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:02:37.778287  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:02:37.778326  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:02:37.792833  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:02:37.792876  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:02:37.864108  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:02:37.864131  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:02:37.864145  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:02:37.948328  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:02:37.948363  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:02:40.491556  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:02:40.504452  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:02:40.504527  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:02:40.539767  152077 cri.go:89] found id: ""
	I0729 19:02:40.539796  152077 logs.go:276] 0 containers: []
	W0729 19:02:40.539806  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:02:40.539815  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:02:40.539877  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:02:40.578112  152077 cri.go:89] found id: ""
	I0729 19:02:40.578145  152077 logs.go:276] 0 containers: []
	W0729 19:02:40.578156  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:02:40.578164  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:02:40.578225  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:02:40.616271  152077 cri.go:89] found id: ""
	I0729 19:02:40.616299  152077 logs.go:276] 0 containers: []
	W0729 19:02:40.616310  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:02:40.616320  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:02:40.616383  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:02:40.649432  152077 cri.go:89] found id: ""
	I0729 19:02:40.649462  152077 logs.go:276] 0 containers: []
	W0729 19:02:40.649474  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:02:40.649487  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:02:40.649555  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:02:40.683411  152077 cri.go:89] found id: ""
	I0729 19:02:40.683442  152077 logs.go:276] 0 containers: []
	W0729 19:02:40.683470  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:02:40.683479  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:02:40.683545  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:02:40.719502  152077 cri.go:89] found id: ""
	I0729 19:02:40.719537  152077 logs.go:276] 0 containers: []
	W0729 19:02:40.719552  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:02:40.719561  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:02:40.719632  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:02:40.753994  152077 cri.go:89] found id: ""
	I0729 19:02:40.754031  152077 logs.go:276] 0 containers: []
	W0729 19:02:40.754050  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:02:40.754059  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:02:40.754120  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:02:40.785672  152077 cri.go:89] found id: ""
	I0729 19:02:40.785696  152077 logs.go:276] 0 containers: []
	W0729 19:02:40.785704  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:02:40.785715  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:02:40.785732  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:02:40.823848  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:02:40.823875  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:02:40.876755  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:02:40.876795  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:02:40.891855  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:02:40.891890  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:02:40.960942  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:02:40.960963  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:02:40.960978  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:02:43.545804  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:02:43.563960  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:02:43.564040  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:02:43.608244  152077 cri.go:89] found id: ""
	I0729 19:02:43.608274  152077 logs.go:276] 0 containers: []
	W0729 19:02:43.608285  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:02:43.608293  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:02:43.608355  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:02:43.650975  152077 cri.go:89] found id: ""
	I0729 19:02:43.651008  152077 logs.go:276] 0 containers: []
	W0729 19:02:43.651020  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:02:43.651028  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:02:43.651095  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:02:43.684761  152077 cri.go:89] found id: ""
	I0729 19:02:43.684801  152077 logs.go:276] 0 containers: []
	W0729 19:02:43.684815  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:02:43.684824  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:02:43.684906  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:02:43.719316  152077 cri.go:89] found id: ""
	I0729 19:02:43.719349  152077 logs.go:276] 0 containers: []
	W0729 19:02:43.719360  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:02:43.719368  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:02:43.719431  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:02:43.754250  152077 cri.go:89] found id: ""
	I0729 19:02:43.754279  152077 logs.go:276] 0 containers: []
	W0729 19:02:43.754291  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:02:43.754299  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:02:43.754366  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:02:43.801642  152077 cri.go:89] found id: ""
	I0729 19:02:43.801684  152077 logs.go:276] 0 containers: []
	W0729 19:02:43.801693  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:02:43.801700  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:02:43.801763  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:02:43.849206  152077 cri.go:89] found id: ""
	I0729 19:02:43.849234  152077 logs.go:276] 0 containers: []
	W0729 19:02:43.849243  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:02:43.849251  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:02:43.849311  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:02:43.890934  152077 cri.go:89] found id: ""
	I0729 19:02:43.890966  152077 logs.go:276] 0 containers: []
	W0729 19:02:43.890978  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:02:43.890991  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:02:43.891007  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:02:43.968849  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:02:43.968895  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:02:44.011519  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:02:44.011558  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:02:44.062658  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:02:44.062696  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:02:44.077041  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:02:44.077076  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:02:44.168203  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:02:46.668890  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:02:46.684267  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:02:46.684328  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:02:46.724984  152077 cri.go:89] found id: ""
	I0729 19:02:46.725012  152077 logs.go:276] 0 containers: []
	W0729 19:02:46.725024  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:02:46.725032  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:02:46.725088  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:02:46.763931  152077 cri.go:89] found id: ""
	I0729 19:02:46.763971  152077 logs.go:276] 0 containers: []
	W0729 19:02:46.763980  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:02:46.763986  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:02:46.764038  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:02:46.802930  152077 cri.go:89] found id: ""
	I0729 19:02:46.802963  152077 logs.go:276] 0 containers: []
	W0729 19:02:46.802974  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:02:46.802983  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:02:46.803048  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:02:46.837181  152077 cri.go:89] found id: ""
	I0729 19:02:46.837211  152077 logs.go:276] 0 containers: []
	W0729 19:02:46.837220  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:02:46.837226  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:02:46.837284  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:02:46.873975  152077 cri.go:89] found id: ""
	I0729 19:02:46.874006  152077 logs.go:276] 0 containers: []
	W0729 19:02:46.874019  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:02:46.874029  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:02:46.874101  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:02:46.909242  152077 cri.go:89] found id: ""
	I0729 19:02:46.909269  152077 logs.go:276] 0 containers: []
	W0729 19:02:46.909279  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:02:46.909288  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:02:46.909349  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:02:46.952878  152077 cri.go:89] found id: ""
	I0729 19:02:46.952904  152077 logs.go:276] 0 containers: []
	W0729 19:02:46.952912  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:02:46.952917  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:02:46.952971  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:02:46.996453  152077 cri.go:89] found id: ""
	I0729 19:02:46.996496  152077 logs.go:276] 0 containers: []
	W0729 19:02:46.996516  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:02:46.996529  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:02:46.996552  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:02:47.059820  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:02:47.059856  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:02:47.082512  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:02:47.082544  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:02:47.171737  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:02:47.171767  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:02:47.171783  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:02:47.251615  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:02:47.251657  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:02:49.797099  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:02:49.812673  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:02:49.812744  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:02:49.847813  152077 cri.go:89] found id: ""
	I0729 19:02:49.847847  152077 logs.go:276] 0 containers: []
	W0729 19:02:49.847858  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:02:49.847867  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:02:49.847948  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:02:49.881577  152077 cri.go:89] found id: ""
	I0729 19:02:49.881610  152077 logs.go:276] 0 containers: []
	W0729 19:02:49.881621  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:02:49.881629  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:02:49.881692  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:02:49.915872  152077 cri.go:89] found id: ""
	I0729 19:02:49.915901  152077 logs.go:276] 0 containers: []
	W0729 19:02:49.915911  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:02:49.915920  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:02:49.915981  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:02:49.957640  152077 cri.go:89] found id: ""
	I0729 19:02:49.957676  152077 logs.go:276] 0 containers: []
	W0729 19:02:49.957687  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:02:49.957695  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:02:49.957762  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:02:49.992588  152077 cri.go:89] found id: ""
	I0729 19:02:49.992612  152077 logs.go:276] 0 containers: []
	W0729 19:02:49.992620  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:02:49.992626  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:02:49.992676  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:02:50.031723  152077 cri.go:89] found id: ""
	I0729 19:02:50.031744  152077 logs.go:276] 0 containers: []
	W0729 19:02:50.031753  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:02:50.031759  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:02:50.031829  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:02:50.068598  152077 cri.go:89] found id: ""
	I0729 19:02:50.068627  152077 logs.go:276] 0 containers: []
	W0729 19:02:50.068638  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:02:50.068646  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:02:50.068704  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:02:50.105065  152077 cri.go:89] found id: ""
	I0729 19:02:50.105097  152077 logs.go:276] 0 containers: []
	W0729 19:02:50.105109  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:02:50.105126  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:02:50.105141  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:02:50.184232  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:02:50.184256  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:02:50.184268  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:02:50.270560  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:02:50.270607  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:02:50.327893  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:02:50.327927  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:02:50.382952  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:02:50.382989  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:02:52.896884  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:02:52.910871  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:02:52.910966  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:02:52.946455  152077 cri.go:89] found id: ""
	I0729 19:02:52.946482  152077 logs.go:276] 0 containers: []
	W0729 19:02:52.946490  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:02:52.946496  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:02:52.946555  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:02:52.987737  152077 cri.go:89] found id: ""
	I0729 19:02:52.987763  152077 logs.go:276] 0 containers: []
	W0729 19:02:52.987772  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:02:52.987778  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:02:52.987827  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:02:53.028158  152077 cri.go:89] found id: ""
	I0729 19:02:53.028192  152077 logs.go:276] 0 containers: []
	W0729 19:02:53.028204  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:02:53.028212  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:02:53.028280  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:02:53.063460  152077 cri.go:89] found id: ""
	I0729 19:02:53.063491  152077 logs.go:276] 0 containers: []
	W0729 19:02:53.063507  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:02:53.063516  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:02:53.063582  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:02:53.095878  152077 cri.go:89] found id: ""
	I0729 19:02:53.095909  152077 logs.go:276] 0 containers: []
	W0729 19:02:53.095922  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:02:53.095929  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:02:53.095992  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:02:53.130495  152077 cri.go:89] found id: ""
	I0729 19:02:53.130526  152077 logs.go:276] 0 containers: []
	W0729 19:02:53.130538  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:02:53.130547  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:02:53.130614  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:02:53.176185  152077 cri.go:89] found id: ""
	I0729 19:02:53.176210  152077 logs.go:276] 0 containers: []
	W0729 19:02:53.176218  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:02:53.176225  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:02:53.176284  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:02:53.215431  152077 cri.go:89] found id: ""
	I0729 19:02:53.215459  152077 logs.go:276] 0 containers: []
	W0729 19:02:53.215471  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:02:53.215483  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:02:53.215499  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:02:53.267986  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:02:53.268024  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:02:53.282021  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:02:53.282050  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:02:53.361080  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:02:53.361113  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:02:53.361129  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:02:53.453568  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:02:53.453598  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:02:55.993913  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:02:56.013828  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:02:56.013912  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:02:56.066489  152077 cri.go:89] found id: ""
	I0729 19:02:56.066523  152077 logs.go:276] 0 containers: []
	W0729 19:02:56.066537  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:02:56.066546  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:02:56.066609  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:02:56.110879  152077 cri.go:89] found id: ""
	I0729 19:02:56.110908  152077 logs.go:276] 0 containers: []
	W0729 19:02:56.110919  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:02:56.110931  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:02:56.110993  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:02:56.164232  152077 cri.go:89] found id: ""
	I0729 19:02:56.164280  152077 logs.go:276] 0 containers: []
	W0729 19:02:56.164304  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:02:56.164313  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:02:56.164377  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:02:56.200894  152077 cri.go:89] found id: ""
	I0729 19:02:56.200931  152077 logs.go:276] 0 containers: []
	W0729 19:02:56.200944  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:02:56.200955  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:02:56.201020  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:02:56.236558  152077 cri.go:89] found id: ""
	I0729 19:02:56.236592  152077 logs.go:276] 0 containers: []
	W0729 19:02:56.236603  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:02:56.236609  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:02:56.236664  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:02:56.275959  152077 cri.go:89] found id: ""
	I0729 19:02:56.275992  152077 logs.go:276] 0 containers: []
	W0729 19:02:56.276000  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:02:56.276006  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:02:56.276066  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:02:56.310551  152077 cri.go:89] found id: ""
	I0729 19:02:56.310584  152077 logs.go:276] 0 containers: []
	W0729 19:02:56.310596  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:02:56.310605  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:02:56.310668  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:02:56.348233  152077 cri.go:89] found id: ""
	I0729 19:02:56.348263  152077 logs.go:276] 0 containers: []
	W0729 19:02:56.348275  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:02:56.348287  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:02:56.348301  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:02:56.432370  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:02:56.432417  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:02:56.472677  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:02:56.472725  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:02:56.534837  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:02:56.534877  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:02:56.549340  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:02:56.549374  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:02:56.622907  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:02:59.123702  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:02:59.138909  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:02:59.138999  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:02:59.177121  152077 cri.go:89] found id: ""
	I0729 19:02:59.177156  152077 logs.go:276] 0 containers: []
	W0729 19:02:59.177168  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:02:59.177177  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:02:59.177251  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:02:59.219570  152077 cri.go:89] found id: ""
	I0729 19:02:59.219607  152077 logs.go:276] 0 containers: []
	W0729 19:02:59.219619  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:02:59.219627  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:02:59.219697  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:02:59.260045  152077 cri.go:89] found id: ""
	I0729 19:02:59.260077  152077 logs.go:276] 0 containers: []
	W0729 19:02:59.260088  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:02:59.260096  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:02:59.260165  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:02:59.304053  152077 cri.go:89] found id: ""
	I0729 19:02:59.304084  152077 logs.go:276] 0 containers: []
	W0729 19:02:59.304095  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:02:59.304102  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:02:59.304167  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:02:59.340227  152077 cri.go:89] found id: ""
	I0729 19:02:59.340261  152077 logs.go:276] 0 containers: []
	W0729 19:02:59.340273  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:02:59.340284  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:02:59.340346  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:02:59.378042  152077 cri.go:89] found id: ""
	I0729 19:02:59.378073  152077 logs.go:276] 0 containers: []
	W0729 19:02:59.378084  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:02:59.378092  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:02:59.378167  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:02:59.411132  152077 cri.go:89] found id: ""
	I0729 19:02:59.411162  152077 logs.go:276] 0 containers: []
	W0729 19:02:59.411173  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:02:59.411181  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:02:59.411246  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:02:59.448387  152077 cri.go:89] found id: ""
	I0729 19:02:59.448411  152077 logs.go:276] 0 containers: []
	W0729 19:02:59.448422  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:02:59.448432  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:02:59.448448  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:02:59.461211  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:02:59.461233  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:02:59.533720  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:02:59.533747  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:02:59.533765  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:02:59.620323  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:02:59.620361  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:02:59.665172  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:02:59.665204  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:03:02.222085  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:03:02.236728  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:03:02.236787  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:03:02.273571  152077 cri.go:89] found id: ""
	I0729 19:03:02.273603  152077 logs.go:276] 0 containers: []
	W0729 19:03:02.273613  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:03:02.273627  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:03:02.273694  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:03:02.308973  152077 cri.go:89] found id: ""
	I0729 19:03:02.309003  152077 logs.go:276] 0 containers: []
	W0729 19:03:02.309013  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:03:02.309019  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:03:02.309078  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:03:02.342836  152077 cri.go:89] found id: ""
	I0729 19:03:02.342863  152077 logs.go:276] 0 containers: []
	W0729 19:03:02.342874  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:03:02.342881  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:03:02.342940  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:03:02.380839  152077 cri.go:89] found id: ""
	I0729 19:03:02.380884  152077 logs.go:276] 0 containers: []
	W0729 19:03:02.380896  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:03:02.380904  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:03:02.380967  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:03:02.415077  152077 cri.go:89] found id: ""
	I0729 19:03:02.415108  152077 logs.go:276] 0 containers: []
	W0729 19:03:02.415116  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:03:02.415122  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:03:02.415184  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:03:02.451126  152077 cri.go:89] found id: ""
	I0729 19:03:02.451155  152077 logs.go:276] 0 containers: []
	W0729 19:03:02.451166  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:03:02.451174  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:03:02.451239  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:03:02.486780  152077 cri.go:89] found id: ""
	I0729 19:03:02.486806  152077 logs.go:276] 0 containers: []
	W0729 19:03:02.486817  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:03:02.486825  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:03:02.486895  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:03:02.519440  152077 cri.go:89] found id: ""
	I0729 19:03:02.519475  152077 logs.go:276] 0 containers: []
	W0729 19:03:02.519486  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:03:02.519499  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:03:02.519513  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:03:02.572534  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:03:02.572567  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:03:02.586074  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:03:02.586100  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:03:02.654630  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:03:02.654658  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:03:02.654675  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:03:02.735145  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:03:02.735181  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:03:05.278097  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:03:05.293125  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:03:05.293196  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:03:05.327028  152077 cri.go:89] found id: ""
	I0729 19:03:05.327079  152077 logs.go:276] 0 containers: []
	W0729 19:03:05.327091  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:03:05.327099  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:03:05.327161  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:03:05.376012  152077 cri.go:89] found id: ""
	I0729 19:03:05.376042  152077 logs.go:276] 0 containers: []
	W0729 19:03:05.376053  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:03:05.376061  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:03:05.376124  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:03:05.409834  152077 cri.go:89] found id: ""
	I0729 19:03:05.409869  152077 logs.go:276] 0 containers: []
	W0729 19:03:05.409879  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:03:05.409887  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:03:05.409939  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:03:05.441838  152077 cri.go:89] found id: ""
	I0729 19:03:05.441875  152077 logs.go:276] 0 containers: []
	W0729 19:03:05.441886  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:03:05.441894  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:03:05.441962  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:03:05.474348  152077 cri.go:89] found id: ""
	I0729 19:03:05.474376  152077 logs.go:276] 0 containers: []
	W0729 19:03:05.474389  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:03:05.474396  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:03:05.474464  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:03:05.510201  152077 cri.go:89] found id: ""
	I0729 19:03:05.510229  152077 logs.go:276] 0 containers: []
	W0729 19:03:05.510239  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:03:05.510246  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:03:05.510300  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:03:05.546668  152077 cri.go:89] found id: ""
	I0729 19:03:05.546697  152077 logs.go:276] 0 containers: []
	W0729 19:03:05.546706  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:03:05.546712  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:03:05.546773  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:03:05.581011  152077 cri.go:89] found id: ""
	I0729 19:03:05.581036  152077 logs.go:276] 0 containers: []
	W0729 19:03:05.581044  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:03:05.581053  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:03:05.581067  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:03:05.664703  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:03:05.664738  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:03:05.707840  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:03:05.707874  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:03:05.757595  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:03:05.757633  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:03:05.770785  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:03:05.770811  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:03:05.835343  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:03:08.335966  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:03:08.349916  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:03:08.349984  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:03:08.385743  152077 cri.go:89] found id: ""
	I0729 19:03:08.385778  152077 logs.go:276] 0 containers: []
	W0729 19:03:08.385801  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:03:08.385809  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:03:08.385876  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:03:08.421325  152077 cri.go:89] found id: ""
	I0729 19:03:08.421350  152077 logs.go:276] 0 containers: []
	W0729 19:03:08.421359  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:03:08.421364  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:03:08.421413  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:03:08.456842  152077 cri.go:89] found id: ""
	I0729 19:03:08.456882  152077 logs.go:276] 0 containers: []
	W0729 19:03:08.456894  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:03:08.456901  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:03:08.456955  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:03:08.490790  152077 cri.go:89] found id: ""
	I0729 19:03:08.490821  152077 logs.go:276] 0 containers: []
	W0729 19:03:08.490832  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:03:08.490840  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:03:08.490903  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:03:08.525825  152077 cri.go:89] found id: ""
	I0729 19:03:08.525853  152077 logs.go:276] 0 containers: []
	W0729 19:03:08.525864  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:03:08.525877  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:03:08.525949  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:03:08.559361  152077 cri.go:89] found id: ""
	I0729 19:03:08.559387  152077 logs.go:276] 0 containers: []
	W0729 19:03:08.559396  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:03:08.559402  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:03:08.559458  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:03:08.598553  152077 cri.go:89] found id: ""
	I0729 19:03:08.598579  152077 logs.go:276] 0 containers: []
	W0729 19:03:08.598587  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:03:08.598593  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:03:08.598657  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:03:08.635535  152077 cri.go:89] found id: ""
	I0729 19:03:08.635562  152077 logs.go:276] 0 containers: []
	W0729 19:03:08.635570  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:03:08.635580  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:03:08.635596  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:03:08.689190  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:03:08.689226  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:03:08.703958  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:03:08.703993  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:03:08.775767  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:03:08.775788  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:03:08.775806  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:03:08.852475  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:03:08.852509  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:03:11.391983  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:03:11.404905  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:03:11.404991  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:03:11.438669  152077 cri.go:89] found id: ""
	I0729 19:03:11.438694  152077 logs.go:276] 0 containers: []
	W0729 19:03:11.438703  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:03:11.438709  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:03:11.438769  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:03:11.473027  152077 cri.go:89] found id: ""
	I0729 19:03:11.473053  152077 logs.go:276] 0 containers: []
	W0729 19:03:11.473061  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:03:11.473066  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:03:11.473118  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:03:11.506162  152077 cri.go:89] found id: ""
	I0729 19:03:11.506190  152077 logs.go:276] 0 containers: []
	W0729 19:03:11.506200  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:03:11.506209  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:03:11.506271  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:03:11.546106  152077 cri.go:89] found id: ""
	I0729 19:03:11.546137  152077 logs.go:276] 0 containers: []
	W0729 19:03:11.546148  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:03:11.546157  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:03:11.546220  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:03:11.578105  152077 cri.go:89] found id: ""
	I0729 19:03:11.578138  152077 logs.go:276] 0 containers: []
	W0729 19:03:11.578149  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:03:11.578157  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:03:11.578221  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:03:11.612042  152077 cri.go:89] found id: ""
	I0729 19:03:11.612072  152077 logs.go:276] 0 containers: []
	W0729 19:03:11.612085  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:03:11.612094  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:03:11.612161  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:03:11.643835  152077 cri.go:89] found id: ""
	I0729 19:03:11.643862  152077 logs.go:276] 0 containers: []
	W0729 19:03:11.643878  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:03:11.643884  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:03:11.643935  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:03:11.676706  152077 cri.go:89] found id: ""
	I0729 19:03:11.676734  152077 logs.go:276] 0 containers: []
	W0729 19:03:11.676744  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:03:11.676757  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:03:11.676772  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:03:11.728996  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:03:11.729031  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:03:11.742740  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:03:11.742768  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:03:11.812384  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:03:11.812406  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:03:11.812423  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:03:11.894992  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:03:11.895035  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:03:14.432788  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:03:14.447761  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:03:14.447830  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:03:14.484825  152077 cri.go:89] found id: ""
	I0729 19:03:14.484864  152077 logs.go:276] 0 containers: []
	W0729 19:03:14.484875  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:03:14.484883  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:03:14.484930  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:03:14.522564  152077 cri.go:89] found id: ""
	I0729 19:03:14.522592  152077 logs.go:276] 0 containers: []
	W0729 19:03:14.522600  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:03:14.522606  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:03:14.522656  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:03:14.563264  152077 cri.go:89] found id: ""
	I0729 19:03:14.563291  152077 logs.go:276] 0 containers: []
	W0729 19:03:14.563301  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:03:14.563308  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:03:14.563369  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:03:14.596797  152077 cri.go:89] found id: ""
	I0729 19:03:14.596822  152077 logs.go:276] 0 containers: []
	W0729 19:03:14.596832  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:03:14.596839  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:03:14.596914  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:03:14.633715  152077 cri.go:89] found id: ""
	I0729 19:03:14.633744  152077 logs.go:276] 0 containers: []
	W0729 19:03:14.633753  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:03:14.633759  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:03:14.633809  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:03:14.672385  152077 cri.go:89] found id: ""
	I0729 19:03:14.672411  152077 logs.go:276] 0 containers: []
	W0729 19:03:14.672421  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:03:14.672428  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:03:14.672488  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:03:14.716211  152077 cri.go:89] found id: ""
	I0729 19:03:14.716236  152077 logs.go:276] 0 containers: []
	W0729 19:03:14.716256  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:03:14.716263  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:03:14.716318  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:03:14.753528  152077 cri.go:89] found id: ""
	I0729 19:03:14.753554  152077 logs.go:276] 0 containers: []
	W0729 19:03:14.753566  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:03:14.753579  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:03:14.753594  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:03:14.842826  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:03:14.842858  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:03:14.893771  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:03:14.893803  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:03:14.960112  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:03:14.960143  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:03:14.977017  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:03:14.977050  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:03:15.044783  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:03:17.545631  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:03:17.560742  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:03:17.560820  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:03:17.603065  152077 cri.go:89] found id: ""
	I0729 19:03:17.603106  152077 logs.go:276] 0 containers: []
	W0729 19:03:17.603118  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:03:17.603125  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:03:17.603192  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:03:17.649586  152077 cri.go:89] found id: ""
	I0729 19:03:17.649619  152077 logs.go:276] 0 containers: []
	W0729 19:03:17.649631  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:03:17.649638  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:03:17.649706  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:03:17.692544  152077 cri.go:89] found id: ""
	I0729 19:03:17.692576  152077 logs.go:276] 0 containers: []
	W0729 19:03:17.692595  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:03:17.692604  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:03:17.692673  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:03:17.732987  152077 cri.go:89] found id: ""
	I0729 19:03:17.733023  152077 logs.go:276] 0 containers: []
	W0729 19:03:17.733034  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:03:17.733042  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:03:17.733109  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:03:17.770099  152077 cri.go:89] found id: ""
	I0729 19:03:17.770136  152077 logs.go:276] 0 containers: []
	W0729 19:03:17.770149  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:03:17.770157  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:03:17.770218  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:03:17.805547  152077 cri.go:89] found id: ""
	I0729 19:03:17.805580  152077 logs.go:276] 0 containers: []
	W0729 19:03:17.805598  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:03:17.805607  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:03:17.805674  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:03:17.842648  152077 cri.go:89] found id: ""
	I0729 19:03:17.842679  152077 logs.go:276] 0 containers: []
	W0729 19:03:17.842691  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:03:17.842699  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:03:17.842768  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:03:17.880761  152077 cri.go:89] found id: ""
	I0729 19:03:17.880797  152077 logs.go:276] 0 containers: []
	W0729 19:03:17.880810  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:03:17.880824  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:03:17.880839  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:03:17.939115  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:03:17.939164  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:03:17.957199  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:03:17.957228  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:03:18.028395  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:03:18.028423  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:03:18.028439  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:03:18.114171  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:03:18.114219  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:03:20.666923  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:03:20.684920  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:03:20.684988  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:03:20.732686  152077 cri.go:89] found id: ""
	I0729 19:03:20.732724  152077 logs.go:276] 0 containers: []
	W0729 19:03:20.732737  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:03:20.732747  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:03:20.732816  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:03:20.777416  152077 cri.go:89] found id: ""
	I0729 19:03:20.777444  152077 logs.go:276] 0 containers: []
	W0729 19:03:20.777456  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:03:20.777463  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:03:20.777527  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:03:20.814837  152077 cri.go:89] found id: ""
	I0729 19:03:20.814865  152077 logs.go:276] 0 containers: []
	W0729 19:03:20.814874  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:03:20.814881  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:03:20.814940  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:03:20.856361  152077 cri.go:89] found id: ""
	I0729 19:03:20.856394  152077 logs.go:276] 0 containers: []
	W0729 19:03:20.856406  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:03:20.856413  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:03:20.856474  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:03:20.896522  152077 cri.go:89] found id: ""
	I0729 19:03:20.896546  152077 logs.go:276] 0 containers: []
	W0729 19:03:20.896556  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:03:20.896564  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:03:20.896626  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:03:20.933113  152077 cri.go:89] found id: ""
	I0729 19:03:20.933141  152077 logs.go:276] 0 containers: []
	W0729 19:03:20.933153  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:03:20.933162  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:03:20.933223  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:03:20.977661  152077 cri.go:89] found id: ""
	I0729 19:03:20.977698  152077 logs.go:276] 0 containers: []
	W0729 19:03:20.977709  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:03:20.977719  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:03:20.977797  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:03:21.014791  152077 cri.go:89] found id: ""
	I0729 19:03:21.014836  152077 logs.go:276] 0 containers: []
	W0729 19:03:21.014848  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:03:21.014862  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:03:21.014879  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:03:21.099071  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:03:21.099098  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:03:21.099120  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:03:21.181222  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:03:21.181255  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:03:21.236256  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:03:21.236295  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:03:21.289089  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:03:21.289120  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:03:23.806369  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:03:23.820597  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:03:23.820670  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:03:23.856491  152077 cri.go:89] found id: ""
	I0729 19:03:23.856519  152077 logs.go:276] 0 containers: []
	W0729 19:03:23.856530  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:03:23.856538  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:03:23.856598  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:03:23.892582  152077 cri.go:89] found id: ""
	I0729 19:03:23.892614  152077 logs.go:276] 0 containers: []
	W0729 19:03:23.892627  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:03:23.892635  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:03:23.892692  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:03:23.927781  152077 cri.go:89] found id: ""
	I0729 19:03:23.927814  152077 logs.go:276] 0 containers: []
	W0729 19:03:23.927831  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:03:23.927841  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:03:23.927915  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:03:23.963735  152077 cri.go:89] found id: ""
	I0729 19:03:23.963767  152077 logs.go:276] 0 containers: []
	W0729 19:03:23.963779  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:03:23.963787  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:03:23.963852  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:03:24.003359  152077 cri.go:89] found id: ""
	I0729 19:03:24.003388  152077 logs.go:276] 0 containers: []
	W0729 19:03:24.003399  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:03:24.003407  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:03:24.003470  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:03:24.041781  152077 cri.go:89] found id: ""
	I0729 19:03:24.041813  152077 logs.go:276] 0 containers: []
	W0729 19:03:24.041823  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:03:24.041831  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:03:24.041892  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:03:24.077589  152077 cri.go:89] found id: ""
	I0729 19:03:24.077617  152077 logs.go:276] 0 containers: []
	W0729 19:03:24.077626  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:03:24.077632  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:03:24.077691  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:03:24.114509  152077 cri.go:89] found id: ""
	I0729 19:03:24.114534  152077 logs.go:276] 0 containers: []
	W0729 19:03:24.114550  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:03:24.114559  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:03:24.114578  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:03:24.151257  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:03:24.151284  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:03:24.209589  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:03:24.209637  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:03:24.228792  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:03:24.228833  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:03:24.327536  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:03:24.327559  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:03:24.327577  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:03:26.915404  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:03:26.932438  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:03:26.932505  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:03:26.980078  152077 cri.go:89] found id: ""
	I0729 19:03:26.980112  152077 logs.go:276] 0 containers: []
	W0729 19:03:26.980125  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:03:26.980133  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:03:26.980201  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:03:27.018677  152077 cri.go:89] found id: ""
	I0729 19:03:27.018712  152077 logs.go:276] 0 containers: []
	W0729 19:03:27.018724  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:03:27.018732  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:03:27.018803  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:03:27.062735  152077 cri.go:89] found id: ""
	I0729 19:03:27.062768  152077 logs.go:276] 0 containers: []
	W0729 19:03:27.062777  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:03:27.062783  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:03:27.062838  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:03:27.107649  152077 cri.go:89] found id: ""
	I0729 19:03:27.107680  152077 logs.go:276] 0 containers: []
	W0729 19:03:27.107692  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:03:27.107699  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:03:27.107763  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:03:27.144808  152077 cri.go:89] found id: ""
	I0729 19:03:27.144847  152077 logs.go:276] 0 containers: []
	W0729 19:03:27.144869  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:03:27.144878  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:03:27.144943  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:03:27.182446  152077 cri.go:89] found id: ""
	I0729 19:03:27.182479  152077 logs.go:276] 0 containers: []
	W0729 19:03:27.182492  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:03:27.182500  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:03:27.182570  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:03:27.228930  152077 cri.go:89] found id: ""
	I0729 19:03:27.228972  152077 logs.go:276] 0 containers: []
	W0729 19:03:27.228985  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:03:27.228993  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:03:27.229058  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:03:27.266301  152077 cri.go:89] found id: ""
	I0729 19:03:27.266332  152077 logs.go:276] 0 containers: []
	W0729 19:03:27.266343  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:03:27.266356  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:03:27.266372  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:03:27.350411  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:03:27.350451  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:03:27.397500  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:03:27.397536  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:03:27.449512  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:03:27.449545  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:03:27.466203  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:03:27.466245  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:03:27.563119  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:03:30.063460  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:03:30.081110  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:03:30.081195  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:03:30.123017  152077 cri.go:89] found id: ""
	I0729 19:03:30.123046  152077 logs.go:276] 0 containers: []
	W0729 19:03:30.123057  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:03:30.123065  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:03:30.123129  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:03:30.162253  152077 cri.go:89] found id: ""
	I0729 19:03:30.162285  152077 logs.go:276] 0 containers: []
	W0729 19:03:30.162297  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:03:30.162305  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:03:30.162365  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:03:30.202125  152077 cri.go:89] found id: ""
	I0729 19:03:30.202161  152077 logs.go:276] 0 containers: []
	W0729 19:03:30.202173  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:03:30.202182  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:03:30.202248  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:03:30.240223  152077 cri.go:89] found id: ""
	I0729 19:03:30.240254  152077 logs.go:276] 0 containers: []
	W0729 19:03:30.240264  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:03:30.240272  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:03:30.240335  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:03:30.280682  152077 cri.go:89] found id: ""
	I0729 19:03:30.280715  152077 logs.go:276] 0 containers: []
	W0729 19:03:30.280728  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:03:30.280736  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:03:30.280801  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:03:30.333582  152077 cri.go:89] found id: ""
	I0729 19:03:30.333613  152077 logs.go:276] 0 containers: []
	W0729 19:03:30.333625  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:03:30.333633  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:03:30.333695  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:03:30.372761  152077 cri.go:89] found id: ""
	I0729 19:03:30.372788  152077 logs.go:276] 0 containers: []
	W0729 19:03:30.372797  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:03:30.372803  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:03:30.372875  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:03:30.411864  152077 cri.go:89] found id: ""
	I0729 19:03:30.411897  152077 logs.go:276] 0 containers: []
	W0729 19:03:30.411909  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:03:30.411922  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:03:30.411934  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:03:30.496524  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:03:30.496564  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:03:30.540951  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:03:30.540985  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:03:30.611601  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:03:30.611651  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:03:30.626812  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:03:30.626848  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:03:30.707215  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
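Each iteration also asks the container runtime for any container whose name matches one of the expected control-plane components; an empty ID list is what produces the repeated `No container was found matching ...` warnings. A rough stand-in for that check, assuming crictl is on PATH and simplifying error handling:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs shells out the same way the log does:
// `sudo crictl ps -a --quiet --name=<component>` prints one container ID per line.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, c := range components {
		ids, err := listContainerIDs(c)
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", c)
			continue
		}
		fmt.Printf("%s: %v\n", c, ids)
	}
}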
	I0729 19:03:33.207958  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:03:33.222034  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:03:33.222100  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:03:33.261088  152077 cri.go:89] found id: ""
	I0729 19:03:33.261119  152077 logs.go:276] 0 containers: []
	W0729 19:03:33.261130  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:03:33.261139  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:03:33.261207  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:03:33.306434  152077 cri.go:89] found id: ""
	I0729 19:03:33.306460  152077 logs.go:276] 0 containers: []
	W0729 19:03:33.306469  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:03:33.306475  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:03:33.306532  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:03:33.342157  152077 cri.go:89] found id: ""
	I0729 19:03:33.342184  152077 logs.go:276] 0 containers: []
	W0729 19:03:33.342193  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:03:33.342199  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:03:33.342250  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:03:33.384361  152077 cri.go:89] found id: ""
	I0729 19:03:33.384394  152077 logs.go:276] 0 containers: []
	W0729 19:03:33.384403  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:03:33.384410  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:03:33.384472  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:03:33.420249  152077 cri.go:89] found id: ""
	I0729 19:03:33.420276  152077 logs.go:276] 0 containers: []
	W0729 19:03:33.420284  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:03:33.420290  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:03:33.420341  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:03:33.456543  152077 cri.go:89] found id: ""
	I0729 19:03:33.456575  152077 logs.go:276] 0 containers: []
	W0729 19:03:33.456586  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:03:33.456594  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:03:33.456656  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:03:33.495390  152077 cri.go:89] found id: ""
	I0729 19:03:33.495430  152077 logs.go:276] 0 containers: []
	W0729 19:03:33.495441  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:03:33.495450  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:03:33.495516  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:03:33.535632  152077 cri.go:89] found id: ""
	I0729 19:03:33.535662  152077 logs.go:276] 0 containers: []
	W0729 19:03:33.535670  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:03:33.535680  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:03:33.535697  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:03:33.616098  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:03:33.616149  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:03:33.657532  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:03:33.657569  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:03:33.712335  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:03:33.712375  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:03:33.727529  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:03:33.727570  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:03:33.799491  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
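`kubectl describe nodes` keeps failing with `connection refused` because nothing is listening on the apiserver port yet, so every kubectl call dies at the TCP dial. A quick way to reproduce just that symptom, assuming the default secure port 8443 used in this run:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Connection refused from kubectl means the TCP dial itself fails;
	// a one-second dial to the apiserver endpoint reproduces the symptom.
	conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}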
	I0729 19:03:36.300251  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:03:36.314874  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:03:36.314954  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:03:36.349282  152077 cri.go:89] found id: ""
	I0729 19:03:36.349312  152077 logs.go:276] 0 containers: []
	W0729 19:03:36.349324  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:03:36.349332  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:03:36.349396  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:03:36.387062  152077 cri.go:89] found id: ""
	I0729 19:03:36.387095  152077 logs.go:276] 0 containers: []
	W0729 19:03:36.387106  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:03:36.387114  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:03:36.387180  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:03:36.424431  152077 cri.go:89] found id: ""
	I0729 19:03:36.424467  152077 logs.go:276] 0 containers: []
	W0729 19:03:36.424479  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:03:36.424488  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:03:36.424558  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:03:36.463348  152077 cri.go:89] found id: ""
	I0729 19:03:36.463378  152077 logs.go:276] 0 containers: []
	W0729 19:03:36.463389  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:03:36.463398  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:03:36.463461  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:03:36.507893  152077 cri.go:89] found id: ""
	I0729 19:03:36.507933  152077 logs.go:276] 0 containers: []
	W0729 19:03:36.507946  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:03:36.507954  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:03:36.508012  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:03:36.548691  152077 cri.go:89] found id: ""
	I0729 19:03:36.548721  152077 logs.go:276] 0 containers: []
	W0729 19:03:36.548733  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:03:36.548741  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:03:36.548812  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:03:36.588600  152077 cri.go:89] found id: ""
	I0729 19:03:36.588632  152077 logs.go:276] 0 containers: []
	W0729 19:03:36.588644  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:03:36.588652  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:03:36.588718  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:03:36.627066  152077 cri.go:89] found id: ""
	I0729 19:03:36.627100  152077 logs.go:276] 0 containers: []
	W0729 19:03:36.627114  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:03:36.627127  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:03:36.627144  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:03:36.712592  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:03:36.712640  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:03:36.769377  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:03:36.769418  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:03:36.844756  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:03:36.844785  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:03:36.858673  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:03:36.858702  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:03:36.935468  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:03:39.435818  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:03:39.450349  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:03:39.450429  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:03:39.485334  152077 cri.go:89] found id: ""
	I0729 19:03:39.485366  152077 logs.go:276] 0 containers: []
	W0729 19:03:39.485377  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:03:39.485386  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:03:39.485451  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:03:39.528940  152077 cri.go:89] found id: ""
	I0729 19:03:39.528972  152077 logs.go:276] 0 containers: []
	W0729 19:03:39.528984  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:03:39.528992  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:03:39.529063  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:03:39.563407  152077 cri.go:89] found id: ""
	I0729 19:03:39.563431  152077 logs.go:276] 0 containers: []
	W0729 19:03:39.563439  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:03:39.563445  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:03:39.563496  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:03:39.614948  152077 cri.go:89] found id: ""
	I0729 19:03:39.614989  152077 logs.go:276] 0 containers: []
	W0729 19:03:39.615001  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:03:39.615010  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:03:39.615081  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:03:39.659935  152077 cri.go:89] found id: ""
	I0729 19:03:39.659960  152077 logs.go:276] 0 containers: []
	W0729 19:03:39.659968  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:03:39.659975  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:03:39.660046  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:03:39.703530  152077 cri.go:89] found id: ""
	I0729 19:03:39.703568  152077 logs.go:276] 0 containers: []
	W0729 19:03:39.703580  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:03:39.703590  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:03:39.703661  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:03:39.750931  152077 cri.go:89] found id: ""
	I0729 19:03:39.750963  152077 logs.go:276] 0 containers: []
	W0729 19:03:39.750976  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:03:39.750985  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:03:39.751051  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:03:39.798698  152077 cri.go:89] found id: ""
	I0729 19:03:39.798733  152077 logs.go:276] 0 containers: []
	W0729 19:03:39.798745  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:03:39.798758  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:03:39.798774  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:03:39.885177  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:03:39.885213  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:03:39.885229  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:03:39.969393  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:03:39.969433  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:03:40.012075  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:03:40.012108  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:03:40.067735  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:03:40.067773  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:03:42.581772  152077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:03:42.597622  152077 kubeadm.go:597] duration metric: took 4m4.404418554s to restartPrimaryControlPlane
	W0729 19:03:42.597709  152077 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 19:03:42.597742  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 19:03:47.758507  152077 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.16073447s)
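After `kubeadm reset` completes, the next step checks whether the kubelet unit is still active; `systemctl is-active --quiet` reports the answer purely through its exit code (0 means active). A small sketch of that check, using the plain unit name rather than the log's exact `service kubelet` phrasing:

package main

import (
	"fmt"
	"os/exec"
)

// kubeletActive returns true when `systemctl is-active --quiet kubelet`
// exits 0, i.e. the unit is currently active.
func kubeletActive() bool {
	return exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run() == nil
}

func main() {
	fmt.Println("kubelet active:", kubeletActive())
}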
	I0729 19:03:47.758605  152077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 19:03:47.774081  152077 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:03:47.785910  152077 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:03:47.796767  152077 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:03:47.796796  152077 kubeadm.go:157] found existing configuration files:
	
	I0729 19:03:47.796852  152077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 19:03:47.806684  152077 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:03:47.806755  152077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:03:47.817125  152077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 19:03:47.826862  152077 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:03:47.826949  152077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:03:47.837161  152077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 19:03:47.847989  152077 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:03:47.848038  152077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:03:47.859156  152077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 19:03:47.868536  152077 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:03:47.868603  152077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
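The grep/rm cycle above is stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443; otherwise it is deleted so the upcoming `kubeadm init` can regenerate it. A condensed sketch of that logic, with paths and endpoint taken from the log (needs to run as root):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, path := range confs {
		data, err := os.ReadFile(path)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or wrong endpoint: drop it so `kubeadm init` rewrites it.
			os.Remove(path)
			fmt.Println("removed stale config:", path)
			continue
		}
		fmt.Println("keeping:", path)
	}
}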
	I0729 19:03:47.879654  152077 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 19:03:47.954229  152077 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 19:03:47.954286  152077 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 19:03:48.094712  152077 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 19:03:48.094910  152077 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 19:03:48.095055  152077 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 19:03:48.280621  152077 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 19:03:48.283329  152077 out.go:204]   - Generating certificates and keys ...
	I0729 19:03:48.283428  152077 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 19:03:48.283510  152077 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 19:03:48.283621  152077 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 19:03:48.283716  152077 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 19:03:48.283833  152077 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 19:03:48.283906  152077 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 19:03:48.283997  152077 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 19:03:48.284085  152077 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 19:03:48.284198  152077 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 19:03:48.284295  152077 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 19:03:48.284353  152077 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 19:03:48.284475  152077 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 19:03:48.410700  152077 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 19:03:48.956314  152077 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 19:03:49.042149  152077 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 19:03:49.152192  152077 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 19:03:49.168989  152077 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 19:03:49.169113  152077 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 19:03:49.169165  152077 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 19:03:49.316971  152077 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 19:03:49.318650  152077 out.go:204]   - Booting up control plane ...
	I0729 19:03:49.318747  152077 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 19:03:49.318829  152077 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 19:03:49.326729  152077 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 19:03:49.329136  152077 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 19:03:49.332569  152077 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 19:04:29.334383  152077 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 19:04:29.334772  152077 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:04:29.335063  152077 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:04:34.335291  152077 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:04:34.335493  152077 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:04:44.336043  152077 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:04:44.336313  152077 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:05:04.337381  152077 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:05:04.337578  152077 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:05:44.339767  152077 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:05:44.339974  152077 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:05:44.340008  152077 kubeadm.go:310] 
	I0729 19:05:44.340078  152077 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 19:05:44.340159  152077 kubeadm.go:310] 		timed out waiting for the condition
	I0729 19:05:44.340185  152077 kubeadm.go:310] 
	I0729 19:05:44.340232  152077 kubeadm.go:310] 	This error is likely caused by:
	I0729 19:05:44.340280  152077 kubeadm.go:310] 		- The kubelet is not running
	I0729 19:05:44.340420  152077 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 19:05:44.340432  152077 kubeadm.go:310] 
	I0729 19:05:44.340566  152077 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 19:05:44.340600  152077 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 19:05:44.340629  152077 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 19:05:44.340635  152077 kubeadm.go:310] 
	I0729 19:05:44.340725  152077 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 19:05:44.340879  152077 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 19:05:44.340894  152077 kubeadm.go:310] 
	I0729 19:05:44.341016  152077 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 19:05:44.341113  152077 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 19:05:44.341219  152077 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 19:05:44.341297  152077 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 19:05:44.341304  152077 kubeadm.go:310] 
	I0729 19:05:44.341775  152077 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 19:05:44.341868  152077 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 19:05:44.341924  152077 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
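The repeated `[kubelet-check]` failures come from kubeadm probing the kubelet's local health endpoint; `connection refused` on 127.0.0.1:10248 means the kubelet process never started listening. The probe boils down to a plain HTTP GET, sketched here against the endpoint printed in the log:

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	// Same request kubeadm's kubelet-check performs: GET /healthz on port 10248.
	resp, err := client.Get("http://localhost:10248/healthz")
	if err != nil {
		fmt.Println("kubelet not healthy:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("kubelet healthz: %d %s\n", resp.StatusCode, body)
}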
	W0729 19:05:44.342102  152077 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0729 19:05:44.342202  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 19:05:44.805573  152077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 19:05:44.820028  152077 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:05:44.830279  152077 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:05:44.830300  152077 kubeadm.go:157] found existing configuration files:
	
	I0729 19:05:44.830350  152077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 19:05:44.841628  152077 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:05:44.841675  152077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:05:44.853942  152077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 19:05:44.863518  152077 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:05:44.863570  152077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:05:44.873566  152077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 19:05:44.882468  152077 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:05:44.882513  152077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:05:44.892503  152077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 19:05:44.902624  152077 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:05:44.902679  152077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 19:05:44.913692  152077 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 19:05:44.984570  152077 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 19:05:44.984647  152077 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 19:05:45.138881  152077 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 19:05:45.139046  152077 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 19:05:45.139165  152077 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 19:05:45.314609  152077 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 19:05:45.316500  152077 out.go:204]   - Generating certificates and keys ...
	I0729 19:05:45.316602  152077 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 19:05:45.316674  152077 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 19:05:45.316780  152077 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 19:05:45.316891  152077 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 19:05:45.317007  152077 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 19:05:45.317090  152077 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 19:05:45.317188  152077 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 19:05:45.317286  152077 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 19:05:45.317384  152077 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 19:05:45.317500  152077 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 19:05:45.317540  152077 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 19:05:45.317594  152077 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 19:05:45.665389  152077 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 19:05:45.789025  152077 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 19:05:46.014686  152077 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 19:05:46.235249  152077 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 19:05:46.256693  152077 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 19:05:46.256804  152077 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 19:05:46.256904  152077 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 19:05:46.393431  152077 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 19:05:46.396119  152077 out.go:204]   - Booting up control plane ...
	I0729 19:05:46.396237  152077 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 19:05:46.407477  152077 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 19:05:46.408913  152077 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 19:05:46.410497  152077 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 19:05:46.413085  152077 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 19:06:26.416123  152077 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 19:06:26.416246  152077 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:06:26.416427  152077 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:06:31.416760  152077 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:06:31.417031  152077 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:06:41.417643  152077 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:06:41.417860  152077 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:07:01.417812  152077 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:07:01.418043  152077 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:07:41.417004  152077 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:07:41.417303  152077 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:07:41.417326  152077 kubeadm.go:310] 
	I0729 19:07:41.417370  152077 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 19:07:41.417434  152077 kubeadm.go:310] 		timed out waiting for the condition
	I0729 19:07:41.417456  152077 kubeadm.go:310] 
	I0729 19:07:41.417514  152077 kubeadm.go:310] 	This error is likely caused by:
	I0729 19:07:41.417586  152077 kubeadm.go:310] 		- The kubelet is not running
	I0729 19:07:41.417720  152077 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 19:07:41.417732  152077 kubeadm.go:310] 
	I0729 19:07:41.417870  152077 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 19:07:41.417917  152077 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 19:07:41.417972  152077 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 19:07:41.417982  152077 kubeadm.go:310] 
	I0729 19:07:41.418104  152077 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 19:07:41.418232  152077 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 19:07:41.418247  152077 kubeadm.go:310] 
	I0729 19:07:41.418370  152077 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 19:07:41.418477  152077 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 19:07:41.418596  152077 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 19:07:41.418696  152077 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 19:07:41.418706  152077 kubeadm.go:310] 
	I0729 19:07:41.419442  152077 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 19:07:41.419562  152077 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 19:07:41.419660  152077 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
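With the retry failing the same way, the practical next step is the one the error text itself suggests: inspect the kubelet unit's journal and list whatever containers CRI-O did manage to start. A small helper that just runs those two suggested commands and prints their output (a convenience wrapper, not part of the test harness):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// run executes a command and prints both the invocation and its combined output.
func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %s\n%s", name, strings.Join(args, " "), out)
	if err != nil {
		fmt.Println("command failed:", err)
	}
}

func main() {
	// The two checks the kubeadm error text recommends.
	run("sudo", "journalctl", "-xeu", "kubelet", "--no-pager", "-n", "100")
	run("sudo", "bash", "-c",
		"crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause")
}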
	I0729 19:07:41.419767  152077 kubeadm.go:394] duration metric: took 8m3.273724985s to StartCluster
	I0729 19:07:41.419853  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:07:41.419923  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:07:41.466895  152077 cri.go:89] found id: ""
	I0729 19:07:41.466922  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.466933  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:07:41.466941  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:07:41.467013  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:07:41.500836  152077 cri.go:89] found id: ""
	I0729 19:07:41.500876  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.500888  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:07:41.500896  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:07:41.500949  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:07:41.534917  152077 cri.go:89] found id: ""
	I0729 19:07:41.534946  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.534958  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:07:41.534968  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:07:41.535038  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:07:41.570516  152077 cri.go:89] found id: ""
	I0729 19:07:41.570545  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.570556  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:07:41.570565  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:07:41.570640  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:07:41.608833  152077 cri.go:89] found id: ""
	I0729 19:07:41.608881  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.608894  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:07:41.608902  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:07:41.608969  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:07:41.644079  152077 cri.go:89] found id: ""
	I0729 19:07:41.644114  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.644127  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:07:41.644136  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:07:41.644198  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:07:41.678991  152077 cri.go:89] found id: ""
	I0729 19:07:41.679019  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.679028  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:07:41.679035  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:07:41.679088  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:07:41.722775  152077 cri.go:89] found id: ""
	I0729 19:07:41.722803  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.722815  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:07:41.722829  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:07:41.722857  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:07:41.841614  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:07:41.841641  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:07:41.841658  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:07:41.945679  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:07:41.945716  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:07:41.984505  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:07:41.984536  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:07:42.037376  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:07:42.037418  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0729 19:07:42.051259  152077 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0729 19:07:42.051310  152077 out.go:239] * 
	W0729 19:07:42.051369  152077 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 19:07:42.051390  152077 out.go:239] * 
	W0729 19:07:42.052280  152077 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 19:07:42.055255  152077 out.go:177] 
	W0729 19:07:42.056299  152077 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 19:07:42.056362  152077 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0729 19:07:42.056391  152077 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0729 19:07:42.057745  152077 out.go:177] 

                                                
                                                
** /stderr **
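The kubeadm output captured above points at the kubelet as the component that never came up on the old-k8s-version node. A minimal triage pass, assuming the VM from this run is still reachable, is to run the exact diagnostics the log recommends through `minikube ssh` (sketch only; the profile name old-k8s-version-834964 is taken from the failing command below):

	# Sketch: collect the kubelet- and container-level evidence suggested in the kubeadm output above.
	out/minikube-linux-amd64 -p old-k8s-version-834964 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-834964 ssh "sudo journalctl -xeu kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-834964 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# The preflight warning also notes the kubelet service is not enabled:
	out/minikube-linux-amd64 -p old-k8s-version-834964 ssh "sudo systemctl enable kubelet.service"
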
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-834964 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
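The suggestion captured in the log is to retry with an explicit kubelet cgroup driver. A hedged reproduction, reusing the arguments from the failed invocation above and adding only the flag the log suggests, would look like this (assumes the same KVM environment as this run):

	# Sketch: re-run the failing start with the cgroup-driver override suggested in the captured output.
	out/minikube-linux-amd64 start -p old-k8s-version-834964 --memory=2200 --alsologtostderr --wait=true \
	  --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false \
	  --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd
	# If it still fails, `minikube logs --file=logs.txt -p old-k8s-version-834964` captures the full log,
	# as the boxed advice in the output above recommends.
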
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-834964 -n old-k8s-version-834964
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-834964 -n old-k8s-version-834964: exit status 2 (239.212148ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-834964 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p old-k8s-version-834964        | old-k8s-version-834964       | jenkins | v1.33.1 | 29 Jul 24 18:53 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-524369                  | no-preload-524369            | jenkins | v1.33.1 | 29 Jul 24 18:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-524369 --memory=2200                     | no-preload-524369            | jenkins | v1.33.1 | 29 Jul 24 18:54 UTC | 29 Jul 24 19:05 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-612270       | default-k8s-diff-port-612270 | jenkins | v1.33.1 | 29 Jul 24 18:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-612270 | jenkins | v1.33.1 | 29 Jul 24 18:55 UTC | 29 Jul 24 19:04 UTC |
	|         | default-k8s-diff-port-612270                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-834964                              | old-k8s-version-834964       | jenkins | v1.33.1 | 29 Jul 24 18:55 UTC | 29 Jul 24 18:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-834964             | old-k8s-version-834964       | jenkins | v1.33.1 | 29 Jul 24 18:55 UTC | 29 Jul 24 18:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-834964                              | old-k8s-version-834964       | jenkins | v1.33.1 | 29 Jul 24 18:55 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-974855                              | cert-expiration-974855       | jenkins | v1.33.1 | 29 Jul 24 19:01 UTC | 29 Jul 24 19:01 UTC |
	| start   | -p newest-cni-453780 --memory=2200 --alsologtostderr   | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:01 UTC | 29 Jul 24 19:02 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-453780             | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-453780                                   | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-453780                  | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-453780 --memory=2200 --alsologtostderr   | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| image   | newest-cni-453780 image list                           | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-453780                                   | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-453780                                   | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-453780                                   | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	| delete  | -p newest-cni-453780                                   | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	| delete  | -p                                                     | disable-driver-mounts-148539 | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | disable-driver-mounts-148539                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-368536                                  | embed-certs-368536           | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:04 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-368536            | embed-certs-368536           | jenkins | v1.33.1 | 29 Jul 24 19:04 UTC | 29 Jul 24 19:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-368536                                  | embed-certs-368536           | jenkins | v1.33.1 | 29 Jul 24 19:04 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-368536                 | embed-certs-368536           | jenkins | v1.33.1 | 29 Jul 24 19:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-368536                                  | embed-certs-368536           | jenkins | v1.33.1 | 29 Jul 24 19:07 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 19:07:08
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 19:07:08.883432  156414 out.go:291] Setting OutFile to fd 1 ...
	I0729 19:07:08.883769  156414 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:07:08.883781  156414 out.go:304] Setting ErrFile to fd 2...
	I0729 19:07:08.883788  156414 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:07:08.883976  156414 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19339-88081/.minikube/bin
	I0729 19:07:08.884534  156414 out.go:298] Setting JSON to false
	I0729 19:07:08.885578  156414 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":13749,"bootTime":1722266280,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 19:07:08.885642  156414 start.go:139] virtualization: kvm guest
	I0729 19:07:08.888023  156414 out.go:177] * [embed-certs-368536] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 19:07:08.889601  156414 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 19:07:08.889601  156414 notify.go:220] Checking for updates...
	I0729 19:07:08.892436  156414 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 19:07:08.893966  156414 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19339-88081/kubeconfig
	I0729 19:07:08.895257  156414 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19339-88081/.minikube
	I0729 19:07:08.896516  156414 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 19:07:08.897746  156414 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 19:07:08.899225  156414 config.go:182] Loaded profile config "embed-certs-368536": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:07:08.899588  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:07:08.899642  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:07:08.914943  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40387
	I0729 19:07:08.915307  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:07:08.915885  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:07:08.915905  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:07:08.916305  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:07:08.916486  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:07:08.916703  156414 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 19:07:08.917034  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:07:08.917074  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:07:08.931497  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36817
	I0729 19:07:08.931857  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:07:08.932280  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:07:08.932300  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:07:08.932640  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:07:08.932819  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:07:08.968386  156414 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 19:07:08.969538  156414 start.go:297] selected driver: kvm2
	I0729 19:07:08.969556  156414 start.go:901] validating driver "kvm2" against &{Name:embed-certs-368536 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-368536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.95 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:07:08.969681  156414 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 19:07:08.970358  156414 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 19:07:08.970428  156414 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19339-88081/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 19:07:08.986808  156414 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 19:07:08.987271  156414 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 19:07:08.987351  156414 cni.go:84] Creating CNI manager for ""
	I0729 19:07:08.987370  156414 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:07:08.987443  156414 start.go:340] cluster config:
	{Name:embed-certs-368536 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-368536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.95 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:07:08.987605  156414 iso.go:125] acquiring lock: {Name:mkff602c753bfa3e0d79d57ee8dc490f2e8d0298 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 19:07:08.989202  156414 out.go:177] * Starting "embed-certs-368536" primary control-plane node in "embed-certs-368536" cluster
	I0729 19:07:08.990452  156414 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 19:07:08.990496  156414 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 19:07:08.990510  156414 cache.go:56] Caching tarball of preloaded images
	I0729 19:07:08.990604  156414 preload.go:172] Found /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 19:07:08.990618  156414 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 19:07:08.990746  156414 profile.go:143] Saving config to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/config.json ...
	I0729 19:07:08.990949  156414 start.go:360] acquireMachinesLock for embed-certs-368536: {Name:mkb4fc91615189a18c2505c715f6575ee0da2912 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 19:07:08.990994  156414 start.go:364] duration metric: took 26.792µs to acquireMachinesLock for "embed-certs-368536"
	I0729 19:07:08.991013  156414 start.go:96] Skipping create...Using existing machine configuration
	I0729 19:07:08.991023  156414 fix.go:54] fixHost starting: 
	I0729 19:07:08.991314  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:07:08.991356  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:07:09.006100  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45265
	I0729 19:07:09.006507  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:07:09.007034  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:07:09.007060  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:07:09.007401  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:07:09.007594  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:07:09.007758  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetState
	I0729 19:07:09.009424  156414 fix.go:112] recreateIfNeeded on embed-certs-368536: state=Running err=<nil>
	W0729 19:07:09.009448  156414 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 19:07:09.011240  156414 out.go:177] * Updating the running kvm2 "embed-certs-368536" VM ...
	I0729 19:07:09.012305  156414 machine.go:94] provisionDockerMachine start ...
	I0729 19:07:09.012324  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:07:09.012506  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:07:09.014924  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:07:09.015334  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:07:09.015362  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:07:09.015507  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:07:09.015664  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:07:09.015796  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:07:09.015962  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:07:09.016106  156414 main.go:141] libmachine: Using SSH client type: native
	I0729 19:07:09.016288  156414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.95 22 <nil> <nil>}
	I0729 19:07:09.016300  156414 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 19:07:11.913157  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:14.985074  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:21.065092  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:24.137179  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:30.217186  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:33.289154  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:41.417004  152077 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:07:41.417303  152077 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:07:41.417326  152077 kubeadm.go:310] 
	I0729 19:07:41.417370  152077 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 19:07:41.417434  152077 kubeadm.go:310] 		timed out waiting for the condition
	I0729 19:07:41.417456  152077 kubeadm.go:310] 
	I0729 19:07:41.417514  152077 kubeadm.go:310] 	This error is likely caused by:
	I0729 19:07:41.417586  152077 kubeadm.go:310] 		- The kubelet is not running
	I0729 19:07:41.417720  152077 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 19:07:41.417732  152077 kubeadm.go:310] 
	I0729 19:07:41.417870  152077 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 19:07:41.417917  152077 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 19:07:41.417972  152077 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 19:07:41.417982  152077 kubeadm.go:310] 
	I0729 19:07:41.418104  152077 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 19:07:41.418232  152077 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 19:07:41.418247  152077 kubeadm.go:310] 
	I0729 19:07:41.418370  152077 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 19:07:41.418477  152077 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 19:07:41.418596  152077 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 19:07:41.418696  152077 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 19:07:41.418706  152077 kubeadm.go:310] 
	I0729 19:07:41.419442  152077 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 19:07:41.419562  152077 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 19:07:41.419660  152077 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 19:07:41.419767  152077 kubeadm.go:394] duration metric: took 8m3.273724985s to StartCluster
	I0729 19:07:41.419853  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:07:41.419923  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:07:41.466895  152077 cri.go:89] found id: ""
	I0729 19:07:41.466922  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.466933  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:07:41.466941  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:07:41.467013  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:07:41.500836  152077 cri.go:89] found id: ""
	I0729 19:07:41.500876  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.500888  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:07:41.500896  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:07:41.500949  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:07:41.534917  152077 cri.go:89] found id: ""
	I0729 19:07:41.534946  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.534958  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:07:41.534968  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:07:41.535038  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:07:41.570516  152077 cri.go:89] found id: ""
	I0729 19:07:41.570545  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.570556  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:07:41.570565  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:07:41.570640  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:07:41.608833  152077 cri.go:89] found id: ""
	I0729 19:07:41.608881  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.608894  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:07:41.608902  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:07:41.608969  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:07:41.644079  152077 cri.go:89] found id: ""
	I0729 19:07:41.644114  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.644127  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:07:41.644136  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:07:41.644198  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:07:41.678991  152077 cri.go:89] found id: ""
	I0729 19:07:41.679019  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.679028  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:07:41.679035  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:07:41.679088  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:07:41.722775  152077 cri.go:89] found id: ""
	I0729 19:07:41.722803  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.722815  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:07:41.722829  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:07:41.722857  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:07:41.841614  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:07:41.841641  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:07:41.841658  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:07:41.945679  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:07:41.945716  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:07:41.984505  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:07:41.984536  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:07:42.037376  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:07:42.037418  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0729 19:07:42.051259  152077 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0729 19:07:42.051310  152077 out.go:239] * 
	W0729 19:07:42.051369  152077 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 19:07:42.051390  152077 out.go:239] * 
	W0729 19:07:42.052280  152077 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 19:07:42.055255  152077 out.go:177] 
	W0729 19:07:42.056299  152077 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 19:07:42.056362  152077 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0729 19:07:42.056391  152077 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0729 19:07:42.057745  152077 out.go:177] 
	
	
	==> CRI-O <==
	Jul 29 19:07:42 old-k8s-version-834964 crio[654]: time="2024-07-29 19:07:42.966415238Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722280062966396907,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=57706c86-6ba1-4651-a4f0-76e3bde25dd5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:07:42 old-k8s-version-834964 crio[654]: time="2024-07-29 19:07:42.966957608Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4f9fc556-25d2-4a00-8ac5-bb8b5d7977a1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:07:42 old-k8s-version-834964 crio[654]: time="2024-07-29 19:07:42.967023678Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4f9fc556-25d2-4a00-8ac5-bb8b5d7977a1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:07:42 old-k8s-version-834964 crio[654]: time="2024-07-29 19:07:42.967055861Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=4f9fc556-25d2-4a00-8ac5-bb8b5d7977a1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:07:43 old-k8s-version-834964 crio[654]: time="2024-07-29 19:07:43.000495619Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9db530d4-1a1b-4411-9d28-0871c8159ff6 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:07:43 old-k8s-version-834964 crio[654]: time="2024-07-29 19:07:43.000579991Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9db530d4-1a1b-4411-9d28-0871c8159ff6 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:07:43 old-k8s-version-834964 crio[654]: time="2024-07-29 19:07:43.001872915Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b487606c-907e-4c55-9061-4c5b275ede8c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:07:43 old-k8s-version-834964 crio[654]: time="2024-07-29 19:07:43.002234007Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722280063002203228,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b487606c-907e-4c55-9061-4c5b275ede8c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:07:43 old-k8s-version-834964 crio[654]: time="2024-07-29 19:07:43.003125763Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1167342e-204e-4d1e-962a-c8475ea31a1d name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:07:43 old-k8s-version-834964 crio[654]: time="2024-07-29 19:07:43.003177780Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1167342e-204e-4d1e-962a-c8475ea31a1d name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:07:43 old-k8s-version-834964 crio[654]: time="2024-07-29 19:07:43.003207864Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=1167342e-204e-4d1e-962a-c8475ea31a1d name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:07:43 old-k8s-version-834964 crio[654]: time="2024-07-29 19:07:43.036185092Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7ac1b832-5f7f-45e6-8248-bf2b169aa512 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:07:43 old-k8s-version-834964 crio[654]: time="2024-07-29 19:07:43.036272064Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7ac1b832-5f7f-45e6-8248-bf2b169aa512 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:07:43 old-k8s-version-834964 crio[654]: time="2024-07-29 19:07:43.037416750Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=221cd1b0-c798-4a64-bd2c-59effe9182ec name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:07:43 old-k8s-version-834964 crio[654]: time="2024-07-29 19:07:43.037835048Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722280063037814374,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=221cd1b0-c798-4a64-bd2c-59effe9182ec name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:07:43 old-k8s-version-834964 crio[654]: time="2024-07-29 19:07:43.038322177Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d26c7cbe-a747-450f-be3d-faff7f50a3a7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:07:43 old-k8s-version-834964 crio[654]: time="2024-07-29 19:07:43.038377596Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d26c7cbe-a747-450f-be3d-faff7f50a3a7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:07:43 old-k8s-version-834964 crio[654]: time="2024-07-29 19:07:43.038407403Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=d26c7cbe-a747-450f-be3d-faff7f50a3a7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:07:43 old-k8s-version-834964 crio[654]: time="2024-07-29 19:07:43.079769851Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ae3214e9-71e2-41f1-8666-f8fe7bd69b37 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:07:43 old-k8s-version-834964 crio[654]: time="2024-07-29 19:07:43.079873203Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ae3214e9-71e2-41f1-8666-f8fe7bd69b37 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:07:43 old-k8s-version-834964 crio[654]: time="2024-07-29 19:07:43.081294335Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9e90eeee-dc55-4f0d-8ff0-f305cca3f6b8 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:07:43 old-k8s-version-834964 crio[654]: time="2024-07-29 19:07:43.081863064Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722280063081833557,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9e90eeee-dc55-4f0d-8ff0-f305cca3f6b8 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:07:43 old-k8s-version-834964 crio[654]: time="2024-07-29 19:07:43.082946420Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=40085058-e4dc-4bd4-8be1-59221c4233d5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:07:43 old-k8s-version-834964 crio[654]: time="2024-07-29 19:07:43.083020408Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=40085058-e4dc-4bd4-8be1-59221c4233d5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:07:43 old-k8s-version-834964 crio[654]: time="2024-07-29 19:07:43.083069758Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=40085058-e4dc-4bd4-8be1-59221c4233d5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul29 18:59] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.057102] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044455] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.946781] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.486761] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.581640] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.337375] systemd-fstab-generator[572]: Ignoring "noauto" option for root device
	[  +0.060537] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068240] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.202307] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.149657] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.264127] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +5.997739] systemd-fstab-generator[842]: Ignoring "noauto" option for root device
	[  +0.070545] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.872502] systemd-fstab-generator[966]: Ignoring "noauto" option for root device
	[ +12.278643] kauditd_printk_skb: 46 callbacks suppressed
	[Jul29 19:03] systemd-fstab-generator[5025]: Ignoring "noauto" option for root device
	[Jul29 19:05] systemd-fstab-generator[5309]: Ignoring "noauto" option for root device
	[  +0.065530] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 19:07:43 up 8 min,  0 users,  load average: 0.00, 0.10, 0.07
	Linux old-k8s-version-834964 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 29 19:07:41 old-k8s-version-834964 kubelet[5490]:         /usr/local/go/src/net/tcpsock_posix.go:61 +0xd7
	Jul 29 19:07:41 old-k8s-version-834964 kubelet[5490]: net.(*sysDialer).dialSingle(0xc0009a0400, 0x4f7fe40, 0xc00042a0c0, 0x4f1ff00, 0xc0008b3da0, 0x0, 0x0, 0x0, 0x0)
	Jul 29 19:07:41 old-k8s-version-834964 kubelet[5490]:         /usr/local/go/src/net/dial.go:580 +0x5e5
	Jul 29 19:07:41 old-k8s-version-834964 kubelet[5490]: net.(*sysDialer).dialSerial(0xc0009a0400, 0x4f7fe40, 0xc00042a0c0, 0xc000a02370, 0x1, 0x1, 0x0, 0x0, 0x0, 0x0)
	Jul 29 19:07:41 old-k8s-version-834964 kubelet[5490]:         /usr/local/go/src/net/dial.go:548 +0x152
	Jul 29 19:07:41 old-k8s-version-834964 kubelet[5490]: net.(*Dialer).DialContext(0xc000c4e240, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000b90090, 0x24, 0x0, 0x0, 0x0, ...)
	Jul 29 19:07:41 old-k8s-version-834964 kubelet[5490]:         /usr/local/go/src/net/dial.go:425 +0x6e5
	Jul 29 19:07:41 old-k8s-version-834964 kubelet[5490]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000c4dfe0, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000b90090, 0x24, 0x60, 0x7ff86421bc38, 0x118, ...)
	Jul 29 19:07:41 old-k8s-version-834964 kubelet[5490]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Jul 29 19:07:41 old-k8s-version-834964 kubelet[5490]: net/http.(*Transport).dial(0xc0008b0dc0, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000b90090, 0x24, 0x0, 0xc00098d860, 0x4fe32c0, ...)
	Jul 29 19:07:41 old-k8s-version-834964 kubelet[5490]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Jul 29 19:07:41 old-k8s-version-834964 kubelet[5490]: net/http.(*Transport).dialConn(0xc0008b0dc0, 0x4f7fe00, 0xc000120018, 0x0, 0xc00001f200, 0x5, 0xc000b90090, 0x24, 0x0, 0xc00002e240, ...)
	Jul 29 19:07:41 old-k8s-version-834964 kubelet[5490]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Jul 29 19:07:41 old-k8s-version-834964 kubelet[5490]: net/http.(*Transport).dialConnFor(0xc0008b0dc0, 0xc000947ce0)
	Jul 29 19:07:41 old-k8s-version-834964 kubelet[5490]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Jul 29 19:07:41 old-k8s-version-834964 kubelet[5490]: created by net/http.(*Transport).queueForDial
	Jul 29 19:07:41 old-k8s-version-834964 kubelet[5490]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Jul 29 19:07:41 old-k8s-version-834964 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Jul 29 19:07:41 old-k8s-version-834964 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 29 19:07:41 old-k8s-version-834964 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 29 19:07:41 old-k8s-version-834964 kubelet[5535]: I0729 19:07:41.790945    5535 server.go:416] Version: v1.20.0
	Jul 29 19:07:41 old-k8s-version-834964 kubelet[5535]: I0729 19:07:41.791181    5535 server.go:837] Client rotation is on, will bootstrap in background
	Jul 29 19:07:41 old-k8s-version-834964 kubelet[5535]: I0729 19:07:41.792920    5535 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 29 19:07:41 old-k8s-version-834964 kubelet[5535]: W0729 19:07:41.793828    5535 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jul 29 19:07:41 old-k8s-version-834964 kubelet[5535]: I0729 19:07:41.794248    5535 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-834964 -n old-k8s-version-834964
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-834964 -n old-k8s-version-834964: exit status 2 (217.524035ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-834964" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (723.97s)
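As a rough manual follow-up to the suggestion captured in the failure output above (a sketch only, not part of the test run; the profile name old-k8s-version-834964 and the out/minikube-linux-amd64 binary path are taken from the log, everything else is an assumption), the kubelet state could be inspected and the start retried with the cgroup driver pinned to systemd:

  # inspect the kubelet inside the VM, per the kubeadm hint in the failure output
  out/minikube-linux-amd64 -p old-k8s-version-834964 ssh "sudo systemctl status kubelet"
  out/minikube-linux-amd64 -p old-k8s-version-834964 ssh "sudo journalctl -u kubelet --no-pager -n 100"
  # list whatever control-plane containers CRI-O managed to create
  out/minikube-linux-amd64 -p old-k8s-version-834964 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a"
  # retry the start with the kubelet cgroup driver set explicitly, as the log suggests
  out/minikube-linux-amd64 start -p old-k8s-version-834964 --extra-config=kubelet.cgroup-driver=systemd

The original start flags for this profile would presumably be repeated alongside the extra-config flag; only the suggested addition is shown here.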

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-612270 -n default-k8s-diff-port-612270
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-07-29 19:13:30.730369338 +0000 UTC m=+6020.701073107
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
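The wait above polls for dashboard pods by the k8s-app=kubernetes-dashboard label. A hedged manual equivalent (a sketch, not part of the captured run; the context name default-k8s-diff-port-612270, the namespace, and the label are taken from the log):

  # check whether the dashboard addon pods ever appeared and why they are not Ready
  kubectl --context default-k8s-diff-port-612270 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide
  kubectl --context default-k8s-diff-port-612270 -n kubernetes-dashboard describe pods -l k8s-app=kubernetes-dashboard
  kubectl --context default-k8s-diff-port-612270 -n kubernetes-dashboard get events --sort-by=.lastTimestamp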
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-612270 -n default-k8s-diff-port-612270
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-612270 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-612270 logs -n 25: (1.290311645s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p old-k8s-version-834964        | old-k8s-version-834964       | jenkins | v1.33.1 | 29 Jul 24 18:53 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-524369                  | no-preload-524369            | jenkins | v1.33.1 | 29 Jul 24 18:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-524369 --memory=2200                     | no-preload-524369            | jenkins | v1.33.1 | 29 Jul 24 18:54 UTC | 29 Jul 24 19:05 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-612270       | default-k8s-diff-port-612270 | jenkins | v1.33.1 | 29 Jul 24 18:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-612270 | jenkins | v1.33.1 | 29 Jul 24 18:55 UTC | 29 Jul 24 19:04 UTC |
	|         | default-k8s-diff-port-612270                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-834964                              | old-k8s-version-834964       | jenkins | v1.33.1 | 29 Jul 24 18:55 UTC | 29 Jul 24 18:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-834964             | old-k8s-version-834964       | jenkins | v1.33.1 | 29 Jul 24 18:55 UTC | 29 Jul 24 18:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-834964                              | old-k8s-version-834964       | jenkins | v1.33.1 | 29 Jul 24 18:55 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-974855                              | cert-expiration-974855       | jenkins | v1.33.1 | 29 Jul 24 19:01 UTC | 29 Jul 24 19:01 UTC |
	| start   | -p newest-cni-453780 --memory=2200 --alsologtostderr   | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:01 UTC | 29 Jul 24 19:02 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-453780             | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-453780                                   | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-453780                  | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-453780 --memory=2200 --alsologtostderr   | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| image   | newest-cni-453780 image list                           | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-453780                                   | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-453780                                   | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-453780                                   | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	| delete  | -p newest-cni-453780                                   | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	| delete  | -p                                                     | disable-driver-mounts-148539 | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | disable-driver-mounts-148539                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-368536                                  | embed-certs-368536           | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:04 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-368536            | embed-certs-368536           | jenkins | v1.33.1 | 29 Jul 24 19:04 UTC | 29 Jul 24 19:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-368536                                  | embed-certs-368536           | jenkins | v1.33.1 | 29 Jul 24 19:04 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-368536                 | embed-certs-368536           | jenkins | v1.33.1 | 29 Jul 24 19:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-368536                                  | embed-certs-368536           | jenkins | v1.33.1 | 29 Jul 24 19:07 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
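	For readability: each wrapped row in the table above is a single CLI invocation. Reassembled from the flags shown in the final "start -p embed-certs-368536" entry, the call would look roughly like the following (a sketch built only from the table's columns, not a verbatim capture of the harness command; the binary path is taken from the MINIKUBE_BIN value logged below):
	
		out/minikube-linux-amd64 start -p embed-certs-368536 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.30.3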
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 19:07:08
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 19:07:08.883432  156414 out.go:291] Setting OutFile to fd 1 ...
	I0729 19:07:08.883769  156414 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:07:08.883781  156414 out.go:304] Setting ErrFile to fd 2...
	I0729 19:07:08.883788  156414 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:07:08.883976  156414 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19339-88081/.minikube/bin
	I0729 19:07:08.884534  156414 out.go:298] Setting JSON to false
	I0729 19:07:08.885578  156414 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":13749,"bootTime":1722266280,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 19:07:08.885642  156414 start.go:139] virtualization: kvm guest
	I0729 19:07:08.888023  156414 out.go:177] * [embed-certs-368536] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 19:07:08.889601  156414 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 19:07:08.889601  156414 notify.go:220] Checking for updates...
	I0729 19:07:08.892436  156414 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 19:07:08.893966  156414 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19339-88081/kubeconfig
	I0729 19:07:08.895257  156414 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19339-88081/.minikube
	I0729 19:07:08.896516  156414 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 19:07:08.897746  156414 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 19:07:08.899225  156414 config.go:182] Loaded profile config "embed-certs-368536": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:07:08.899588  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:07:08.899642  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:07:08.914943  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40387
	I0729 19:07:08.915307  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:07:08.915885  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:07:08.915905  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:07:08.916305  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:07:08.916486  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:07:08.916703  156414 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 19:07:08.917034  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:07:08.917074  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:07:08.931497  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36817
	I0729 19:07:08.931857  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:07:08.932280  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:07:08.932300  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:07:08.932640  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:07:08.932819  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:07:08.968386  156414 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 19:07:08.969538  156414 start.go:297] selected driver: kvm2
	I0729 19:07:08.969556  156414 start.go:901] validating driver "kvm2" against &{Name:embed-certs-368536 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-368536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.95 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:07:08.969681  156414 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 19:07:08.970358  156414 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 19:07:08.970428  156414 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19339-88081/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 19:07:08.986808  156414 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 19:07:08.987271  156414 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 19:07:08.987351  156414 cni.go:84] Creating CNI manager for ""
	I0729 19:07:08.987370  156414 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:07:08.987443  156414 start.go:340] cluster config:
	{Name:embed-certs-368536 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-368536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.95 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:07:08.987605  156414 iso.go:125] acquiring lock: {Name:mkff602c753bfa3e0d79d57ee8dc490f2e8d0298 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 19:07:08.989202  156414 out.go:177] * Starting "embed-certs-368536" primary control-plane node in "embed-certs-368536" cluster
	I0729 19:07:08.990452  156414 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 19:07:08.990496  156414 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 19:07:08.990510  156414 cache.go:56] Caching tarball of preloaded images
	I0729 19:07:08.990604  156414 preload.go:172] Found /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 19:07:08.990618  156414 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 19:07:08.990746  156414 profile.go:143] Saving config to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/config.json ...
	I0729 19:07:08.990949  156414 start.go:360] acquireMachinesLock for embed-certs-368536: {Name:mkb4fc91615189a18c2505c715f6575ee0da2912 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 19:07:08.990994  156414 start.go:364] duration metric: took 26.792µs to acquireMachinesLock for "embed-certs-368536"
	I0729 19:07:08.991013  156414 start.go:96] Skipping create...Using existing machine configuration
	I0729 19:07:08.991023  156414 fix.go:54] fixHost starting: 
	I0729 19:07:08.991314  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:07:08.991356  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:07:09.006100  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45265
	I0729 19:07:09.006507  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:07:09.007034  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:07:09.007060  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:07:09.007401  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:07:09.007594  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:07:09.007758  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetState
	I0729 19:07:09.009424  156414 fix.go:112] recreateIfNeeded on embed-certs-368536: state=Running err=<nil>
	W0729 19:07:09.009448  156414 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 19:07:09.011240  156414 out.go:177] * Updating the running kvm2 "embed-certs-368536" VM ...
	I0729 19:07:09.012305  156414 machine.go:94] provisionDockerMachine start ...
	I0729 19:07:09.012324  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:07:09.012506  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:07:09.014924  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:07:09.015334  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:07:09.015362  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:07:09.015507  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:07:09.015664  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:07:09.015796  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:07:09.015962  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:07:09.016106  156414 main.go:141] libmachine: Using SSH client type: native
	I0729 19:07:09.016288  156414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.95 22 <nil> <nil>}
	I0729 19:07:09.016300  156414 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 19:07:11.913157  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:14.985074  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:21.065092  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:24.137179  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:30.217186  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:33.289154  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:41.417004  152077 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:07:41.417303  152077 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:07:41.417326  152077 kubeadm.go:310] 
	I0729 19:07:41.417370  152077 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 19:07:41.417434  152077 kubeadm.go:310] 		timed out waiting for the condition
	I0729 19:07:41.417456  152077 kubeadm.go:310] 
	I0729 19:07:41.417514  152077 kubeadm.go:310] 	This error is likely caused by:
	I0729 19:07:41.417586  152077 kubeadm.go:310] 		- The kubelet is not running
	I0729 19:07:41.417720  152077 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 19:07:41.417732  152077 kubeadm.go:310] 
	I0729 19:07:41.417870  152077 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 19:07:41.417917  152077 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 19:07:41.417972  152077 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 19:07:41.417982  152077 kubeadm.go:310] 
	I0729 19:07:41.418104  152077 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 19:07:41.418232  152077 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 19:07:41.418247  152077 kubeadm.go:310] 
	I0729 19:07:41.418370  152077 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 19:07:41.418477  152077 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 19:07:41.418596  152077 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 19:07:41.418696  152077 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 19:07:41.418706  152077 kubeadm.go:310] 
	I0729 19:07:41.419442  152077 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 19:07:41.419562  152077 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 19:07:41.419660  152077 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 19:07:41.419767  152077 kubeadm.go:394] duration metric: took 8m3.273724985s to StartCluster
	I0729 19:07:41.419853  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:07:41.419923  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:07:41.466895  152077 cri.go:89] found id: ""
	I0729 19:07:41.466922  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.466933  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:07:41.466941  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:07:41.467013  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:07:41.500836  152077 cri.go:89] found id: ""
	I0729 19:07:41.500876  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.500888  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:07:41.500896  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:07:41.500949  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:07:41.534917  152077 cri.go:89] found id: ""
	I0729 19:07:41.534946  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.534958  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:07:41.534968  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:07:41.535038  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:07:41.570516  152077 cri.go:89] found id: ""
	I0729 19:07:41.570545  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.570556  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:07:41.570565  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:07:41.570640  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:07:41.608833  152077 cri.go:89] found id: ""
	I0729 19:07:41.608881  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.608894  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:07:41.608902  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:07:41.608969  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:07:41.644079  152077 cri.go:89] found id: ""
	I0729 19:07:41.644114  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.644127  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:07:41.644136  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:07:41.644198  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:07:41.678991  152077 cri.go:89] found id: ""
	I0729 19:07:41.679019  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.679028  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:07:41.679035  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:07:41.679088  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:07:41.722775  152077 cri.go:89] found id: ""
	I0729 19:07:41.722803  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.722815  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:07:41.722829  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:07:41.722857  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:07:41.841614  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:07:41.841641  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:07:41.841658  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:07:41.945679  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:07:41.945716  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:07:41.984505  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:07:41.984536  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:07:42.037376  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:07:42.037418  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0729 19:07:42.051259  152077 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0729 19:07:42.051310  152077 out.go:239] * 
	W0729 19:07:42.051369  152077 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 19:07:42.051390  152077 out.go:239] * 
	W0729 19:07:42.052280  152077 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 19:07:42.055255  152077 out.go:177] 
	W0729 19:07:42.056299  152077 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 19:07:42.056362  152077 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0729 19:07:42.056391  152077 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0729 19:07:42.057745  152077 out.go:177] 
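	The v1.20.0 kubeadm init above never got a healthy kubelet on port 10248, and minikube's suggestion is to retry with the kubelet's cgroup driver forced to systemd. A minimal sketch of such a retry, assuming this log belongs to the old-k8s-version-834964 profile from the command table (the exact harness invocation is not shown here), would be:
	
		out/minikube-linux-amd64 start -p old-k8s-version-834964 --memory=2200 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd
	
	If the kubelet still fails to come up, 'journalctl -xeu kubelet' on the node remains the first place to look, as the kubeadm output above advises.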
	I0729 19:07:42.413268  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:45.481071  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:51.561079  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:54.633091  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:00.713100  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:03.785182  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:09.865116  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:12.937102  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:19.017094  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:22.093191  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:28.169116  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:31.241130  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:37.321134  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:40.393092  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:46.473118  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:49.545166  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:55.625086  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:58.697184  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:04.777113  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:07.849165  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:13.933065  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:17.001100  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:23.081133  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:26.153086  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:32.233178  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:35.305183  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:41.385184  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:44.461106  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:50.537120  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:53.609150  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:59.689091  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:02.761193  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:08.845074  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:11.917067  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:17.993090  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:21.065137  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:27.145098  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:30.217175  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:36.301060  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:39.369118  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:45.449082  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:48.521097  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:54.601120  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:57.673200  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:03.753116  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:06.825136  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:12.905195  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:15.977118  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:22.057076  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:25.129144  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:31.209150  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:34.281164  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:40.365067  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:43.437113  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:46.437452  156414 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 19:11:46.437532  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetMachineName
	I0729 19:11:46.437865  156414 buildroot.go:166] provisioning hostname "embed-certs-368536"
	I0729 19:11:46.437902  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetMachineName
	I0729 19:11:46.438117  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:11:46.439690  156414 machine.go:97] duration metric: took 4m37.427371174s to provisionDockerMachine
	I0729 19:11:46.439733  156414 fix.go:56] duration metric: took 4m37.448711854s for fixHost
	I0729 19:11:46.439746  156414 start.go:83] releasing machines lock for "embed-certs-368536", held for 4m37.448735558s
	W0729 19:11:46.439780  156414 start.go:714] error starting host: provision: host is not running
	W0729 19:11:46.439926  156414 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0729 19:11:46.439941  156414 start.go:729] Will try again in 5 seconds ...
	I0729 19:11:51.440155  156414 start.go:360] acquireMachinesLock for embed-certs-368536: {Name:mkb4fc91615189a18c2505c715f6575ee0da2912 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 19:11:51.440266  156414 start.go:364] duration metric: took 69.381µs to acquireMachinesLock for "embed-certs-368536"
	I0729 19:11:51.440311  156414 start.go:96] Skipping create...Using existing machine configuration
	I0729 19:11:51.440322  156414 fix.go:54] fixHost starting: 
	I0729 19:11:51.440735  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:11:51.440764  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:11:51.455959  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35793
	I0729 19:11:51.456443  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:11:51.457053  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:11:51.457078  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:11:51.457475  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:11:51.457721  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:11:51.457885  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetState
	I0729 19:11:51.459531  156414 fix.go:112] recreateIfNeeded on embed-certs-368536: state=Stopped err=<nil>
	I0729 19:11:51.459558  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	W0729 19:11:51.459736  156414 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 19:11:51.461603  156414 out.go:177] * Restarting existing kvm2 VM for "embed-certs-368536" ...
	I0729 19:11:51.462768  156414 main.go:141] libmachine: (embed-certs-368536) Calling .Start
	I0729 19:11:51.462973  156414 main.go:141] libmachine: (embed-certs-368536) Ensuring networks are active...
	I0729 19:11:51.463679  156414 main.go:141] libmachine: (embed-certs-368536) Ensuring network default is active
	I0729 19:11:51.464155  156414 main.go:141] libmachine: (embed-certs-368536) Ensuring network mk-embed-certs-368536 is active
	I0729 19:11:51.464571  156414 main.go:141] libmachine: (embed-certs-368536) Getting domain xml...
	I0729 19:11:51.465359  156414 main.go:141] libmachine: (embed-certs-368536) Creating domain...
	I0729 19:11:51.794918  156414 main.go:141] libmachine: (embed-certs-368536) Waiting to get IP...
	I0729 19:11:51.795679  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:51.796159  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:51.796221  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:51.796150  157493 retry.go:31] will retry after 235.729444ms: waiting for machine to come up
	I0729 19:11:52.033554  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:52.034180  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:52.034207  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:52.034103  157493 retry.go:31] will retry after 323.595446ms: waiting for machine to come up
	I0729 19:11:52.359640  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:52.360204  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:52.360233  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:52.360143  157493 retry.go:31] will retry after 341.954873ms: waiting for machine to come up
	I0729 19:11:52.703779  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:52.704350  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:52.704373  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:52.704298  157493 retry.go:31] will retry after 385.738264ms: waiting for machine to come up
	I0729 19:11:53.091976  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:53.092451  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:53.092484  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:53.092397  157493 retry.go:31] will retry after 759.811264ms: waiting for machine to come up
	I0729 19:11:53.853241  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:53.853799  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:53.853826  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:53.853755  157493 retry.go:31] will retry after 761.294244ms: waiting for machine to come up
	I0729 19:11:54.616298  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:54.617009  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:54.617039  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:54.616952  157493 retry.go:31] will retry after 1.056936741s: waiting for machine to come up
	I0729 19:11:55.675047  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:55.675491  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:55.675514  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:55.675442  157493 retry.go:31] will retry after 1.232760679s: waiting for machine to come up
	I0729 19:11:56.909745  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:56.910283  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:56.910309  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:56.910228  157493 retry.go:31] will retry after 1.432617964s: waiting for machine to come up
	I0729 19:11:58.344399  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:58.345006  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:58.345033  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:58.344957  157493 retry.go:31] will retry after 1.914060621s: waiting for machine to come up
	I0729 19:12:00.262146  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:00.262707  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:12:00.262739  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:12:00.262638  157493 retry.go:31] will retry after 2.77447957s: waiting for machine to come up
	I0729 19:12:03.039059  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:03.039693  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:12:03.039718  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:12:03.039650  157493 retry.go:31] will retry after 2.755354142s: waiting for machine to come up
	I0729 19:12:05.797890  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:05.798279  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:12:05.798306  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:12:05.798234  157493 retry.go:31] will retry after 4.501451096s: waiting for machine to come up
	I0729 19:12:10.304116  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.304586  156414 main.go:141] libmachine: (embed-certs-368536) Found IP for machine: 192.168.50.95
	I0729 19:12:10.304616  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has current primary IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.304626  156414 main.go:141] libmachine: (embed-certs-368536) Reserving static IP address...
	I0729 19:12:10.305096  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "embed-certs-368536", mac: "52:54:00:86:e7:e8", ip: "192.168.50.95"} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.305134  156414 main.go:141] libmachine: (embed-certs-368536) DBG | skip adding static IP to network mk-embed-certs-368536 - found existing host DHCP lease matching {name: "embed-certs-368536", mac: "52:54:00:86:e7:e8", ip: "192.168.50.95"}
	I0729 19:12:10.305151  156414 main.go:141] libmachine: (embed-certs-368536) Reserved static IP address: 192.168.50.95
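The repeated "will retry after ...: waiting for machine to come up" lines above come from libmachine polling the domain's DHCP lease with a growing, jittered delay until an address appears. A minimal Go sketch of that polling pattern, with a hypothetical lookupIP helper standing in for the libvirt lease query (not minikube's actual retry code), could look like:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP stands in for querying the libvirt DHCP leases for the domain's
    // MAC address; here it fails until the 5th poll, when a lease "appears".
    func lookupIP(attempt int) (string, error) {
        if attempt < 5 {
            return "", errors.New("unable to find current IP address")
        }
        return "192.168.50.95", nil
    }

    // waitForIP retries with a growing, jittered delay, roughly matching the
    // 235ms -> 323ms -> ... -> 4.5s progression seen in the log above.
    func waitForIP(timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        for attempt := 0; time.Now().Before(deadline); attempt++ {
            ip, err := lookupIP(attempt)
            if err == nil {
                return ip, nil
            }
            delay := time.Duration(200+150*attempt)*time.Millisecond +
                time.Duration(rand.Intn(200))*time.Millisecond
            fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
            time.Sleep(delay)
        }
        return "", fmt.Errorf("no IP within %v", timeout)
    }

    func main() {
        ip, err := waitForIP(2 * time.Minute)
        fmt.Println(ip, err)
    }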
	I0729 19:12:10.305166  156414 main.go:141] libmachine: (embed-certs-368536) Waiting for SSH to be available...
	I0729 19:12:10.305184  156414 main.go:141] libmachine: (embed-certs-368536) DBG | Getting to WaitForSSH function...
	I0729 19:12:10.307568  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.307936  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.307972  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.308079  156414 main.go:141] libmachine: (embed-certs-368536) DBG | Using SSH client type: external
	I0729 19:12:10.308110  156414 main.go:141] libmachine: (embed-certs-368536) DBG | Using SSH private key: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/embed-certs-368536/id_rsa (-rw-------)
	I0729 19:12:10.308140  156414 main.go:141] libmachine: (embed-certs-368536) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.95 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19339-88081/.minikube/machines/embed-certs-368536/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 19:12:10.308157  156414 main.go:141] libmachine: (embed-certs-368536) DBG | About to run SSH command:
	I0729 19:12:10.308170  156414 main.go:141] libmachine: (embed-certs-368536) DBG | exit 0
	I0729 19:12:10.436958  156414 main.go:141] libmachine: (embed-certs-368536) DBG | SSH cmd err, output: <nil>: 
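The "Using SSH client type: external" block above shows the kvm2 driver shelling out to /usr/bin/ssh with a fixed option set and the machine's id_rsa key, running "exit 0" until the guest answers. A minimal sketch of assembling such an invocation with os/exec (addr and keyPath are placeholders, not the real minikube paths or implementation):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // buildSSHCommand mirrors the option set logged above; it is illustrative,
    // not the actual libmachine code.
    func buildSSHCommand(addr, keyPath, remoteCmd string) *exec.Cmd {
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "-p", "22",
            "docker@" + addr,
            remoteCmd,
        }
        return exec.Command("/usr/bin/ssh", args...)
    }

    func main() {
        cmd := buildSSHCommand("192.168.50.95", "/path/to/id_rsa", "exit 0")
        out, err := cmd.CombinedOutput()
        fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
    }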
	I0729 19:12:10.437313  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetConfigRaw
	I0729 19:12:10.437962  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetIP
	I0729 19:12:10.440164  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.440520  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.440545  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.440930  156414 profile.go:143] Saving config to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/config.json ...
	I0729 19:12:10.441184  156414 machine.go:94] provisionDockerMachine start ...
	I0729 19:12:10.441207  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:12:10.441430  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:10.443802  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.444155  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.444187  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.444375  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:10.444541  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:10.444719  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:10.444897  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:10.445064  156414 main.go:141] libmachine: Using SSH client type: native
	I0729 19:12:10.445289  156414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.95 22 <nil> <nil>}
	I0729 19:12:10.445300  156414 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 19:12:10.557371  156414 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 19:12:10.557405  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetMachineName
	I0729 19:12:10.557675  156414 buildroot.go:166] provisioning hostname "embed-certs-368536"
	I0729 19:12:10.557706  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetMachineName
	I0729 19:12:10.557905  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:10.560444  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.560793  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.560819  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.561027  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:10.561249  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:10.561442  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:10.561594  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:10.561783  156414 main.go:141] libmachine: Using SSH client type: native
	I0729 19:12:10.561990  156414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.95 22 <nil> <nil>}
	I0729 19:12:10.562006  156414 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-368536 && echo "embed-certs-368536" | sudo tee /etc/hostname
	I0729 19:12:10.688844  156414 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-368536
	
	I0729 19:12:10.688891  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:10.691701  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.692075  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.692102  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.692293  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:10.692468  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:10.692660  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:10.692821  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:10.693024  156414 main.go:141] libmachine: Using SSH client type: native
	I0729 19:12:10.693222  156414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.95 22 <nil> <nil>}
	I0729 19:12:10.693245  156414 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-368536' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-368536/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-368536' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 19:12:10.814346  156414 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 19:12:10.814376  156414 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19339-88081/.minikube CaCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19339-88081/.minikube}
	I0729 19:12:10.814400  156414 buildroot.go:174] setting up certificates
	I0729 19:12:10.814409  156414 provision.go:84] configureAuth start
	I0729 19:12:10.814417  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetMachineName
	I0729 19:12:10.814706  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetIP
	I0729 19:12:10.817503  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.817859  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.817886  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.818025  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:10.820077  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.820443  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.820473  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.820617  156414 provision.go:143] copyHostCerts
	I0729 19:12:10.820703  156414 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem, removing ...
	I0729 19:12:10.820713  156414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem
	I0729 19:12:10.820781  156414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem (1078 bytes)
	I0729 19:12:10.820905  156414 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem, removing ...
	I0729 19:12:10.820914  156414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem
	I0729 19:12:10.820943  156414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem (1123 bytes)
	I0729 19:12:10.821005  156414 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem, removing ...
	I0729 19:12:10.821013  156414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem
	I0729 19:12:10.821041  156414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem (1679 bytes)
	I0729 19:12:10.821115  156414 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem org=jenkins.embed-certs-368536 san=[127.0.0.1 192.168.50.95 embed-certs-368536 localhost minikube]
	I0729 19:12:10.867595  156414 provision.go:177] copyRemoteCerts
	I0729 19:12:10.867661  156414 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 19:12:10.867685  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:10.870501  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.870857  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.870881  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.871063  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:10.871267  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:10.871428  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:10.871595  156414 sshutil.go:53] new ssh client: &{IP:192.168.50.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/embed-certs-368536/id_rsa Username:docker}
	I0729 19:12:10.956269  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 19:12:10.981554  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 19:12:11.006055  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 19:12:11.031709  156414 provision.go:87] duration metric: took 217.286178ms to configureAuth
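The configureAuth step above generates server.pem for the SAN list [127.0.0.1 192.168.50.95 embed-certs-368536 localhost minikube] and copies it to /etc/docker on the guest. A minimal crypto/x509 sketch of issuing a certificate with that SAN list (self-signed here for brevity, whereas minikube signs it with its ca.pem/ca-key.pem):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-368536"}},
            NotBefore:    time.Now(),
            // 26280h matches the CertExpiration shown in the cluster config below.
            NotAfter:    time.Now().Add(26280 * time.Hour),
            KeyUsage:    x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:    []string{"embed-certs-368536", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.95")},
        }
        // Self-signed: template doubles as parent for illustration only.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
    }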
	I0729 19:12:11.031746  156414 buildroot.go:189] setting minikube options for container-runtime
	I0729 19:12:11.031924  156414 config.go:182] Loaded profile config "embed-certs-368536": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:12:11.032088  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:11.034772  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.035129  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:11.035159  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.035405  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:11.035583  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:11.035737  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:11.035863  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:11.036016  156414 main.go:141] libmachine: Using SSH client type: native
	I0729 19:12:11.036203  156414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.95 22 <nil> <nil>}
	I0729 19:12:11.036218  156414 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 19:12:11.321782  156414 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 19:12:11.321810  156414 machine.go:97] duration metric: took 880.608535ms to provisionDockerMachine
	I0729 19:12:11.321823  156414 start.go:293] postStartSetup for "embed-certs-368536" (driver="kvm2")
	I0729 19:12:11.321834  156414 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 19:12:11.321850  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:12:11.322243  156414 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 19:12:11.322281  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:11.325081  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.325437  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:11.325459  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.325612  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:11.325841  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:11.326025  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:11.326177  156414 sshutil.go:53] new ssh client: &{IP:192.168.50.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/embed-certs-368536/id_rsa Username:docker}
	I0729 19:12:11.412548  156414 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 19:12:11.417360  156414 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 19:12:11.417386  156414 filesync.go:126] Scanning /home/jenkins/minikube-integration/19339-88081/.minikube/addons for local assets ...
	I0729 19:12:11.417466  156414 filesync.go:126] Scanning /home/jenkins/minikube-integration/19339-88081/.minikube/files for local assets ...
	I0729 19:12:11.417538  156414 filesync.go:149] local asset: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem -> 952822.pem in /etc/ssl/certs
	I0729 19:12:11.417632  156414 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 19:12:11.429176  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem --> /etc/ssl/certs/952822.pem (1708 bytes)
	I0729 19:12:11.457333  156414 start.go:296] duration metric: took 135.490005ms for postStartSetup
	I0729 19:12:11.457401  156414 fix.go:56] duration metric: took 20.017075153s for fixHost
	I0729 19:12:11.457428  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:11.460243  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.460653  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:11.460702  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.460873  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:11.461060  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:11.461233  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:11.461408  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:11.461586  156414 main.go:141] libmachine: Using SSH client type: native
	I0729 19:12:11.461763  156414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.95 22 <nil> <nil>}
	I0729 19:12:11.461779  156414 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 19:12:11.573697  156414 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722280331.531987438
	
	I0729 19:12:11.573724  156414 fix.go:216] guest clock: 1722280331.531987438
	I0729 19:12:11.573735  156414 fix.go:229] Guest: 2024-07-29 19:12:11.531987438 +0000 UTC Remote: 2024-07-29 19:12:11.457406225 +0000 UTC m=+302.608153452 (delta=74.581213ms)
	I0729 19:12:11.573758  156414 fix.go:200] guest clock delta is within tolerance: 74.581213ms
	I0729 19:12:11.573763  156414 start.go:83] releasing machines lock for "embed-certs-368536", held for 20.133485011s
	I0729 19:12:11.573782  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:12:11.574056  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetIP
	I0729 19:12:11.576405  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.576760  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:11.576798  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.576988  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:12:11.577479  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:12:11.577644  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:12:11.577737  156414 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 19:12:11.577799  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:11.577840  156414 ssh_runner.go:195] Run: cat /version.json
	I0729 19:12:11.577869  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:11.580473  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.580564  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.580840  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:11.580890  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.580921  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:11.580936  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.581056  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:11.581203  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:11.581358  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:11.581369  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:11.581554  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:11.581564  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:11.581669  156414 sshutil.go:53] new ssh client: &{IP:192.168.50.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/embed-certs-368536/id_rsa Username:docker}
	I0729 19:12:11.581732  156414 sshutil.go:53] new ssh client: &{IP:192.168.50.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/embed-certs-368536/id_rsa Username:docker}
	I0729 19:12:11.662254  156414 ssh_runner.go:195] Run: systemctl --version
	I0729 19:12:11.688214  156414 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 19:12:11.838322  156414 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 19:12:11.844435  156414 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 19:12:11.844521  156414 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 19:12:11.859899  156414 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 19:12:11.859923  156414 start.go:495] detecting cgroup driver to use...
	I0729 19:12:11.859990  156414 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 19:12:11.876768  156414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 19:12:11.890508  156414 docker.go:217] disabling cri-docker service (if available) ...
	I0729 19:12:11.890584  156414 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 19:12:11.904817  156414 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 19:12:11.919251  156414 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 19:12:12.053205  156414 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 19:12:12.205975  156414 docker.go:233] disabling docker service ...
	I0729 19:12:12.206041  156414 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 19:12:12.222129  156414 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 19:12:12.235288  156414 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 19:12:12.386940  156414 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 19:12:12.503688  156414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 19:12:12.518539  156414 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 19:12:12.538744  156414 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 19:12:12.538805  156414 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:12:12.549615  156414 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 19:12:12.549683  156414 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:12:12.560565  156414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:12:12.571814  156414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:12:12.582852  156414 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 19:12:12.595237  156414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:12:12.607232  156414 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:12:12.626183  156414 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
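The run of `sed -i` commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place to pin the pause image, switch the cgroup manager to cgroupfs, and add the unprivileged-port sysctl. An equivalent in-process sketch of that "replace the whole key = ... line" edit using regexp (the helper and sample input are illustrative, not minikube code):

    package main

    import (
        "fmt"
        "regexp"
    )

    // setTOMLKey replaces an existing `key = ...` line, mirroring the
    // `sed -i 's|^.*key = .*$|key = "value"|'` invocations in the log.
    func setTOMLKey(conf []byte, key, value string) []byte {
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        return re.ReplaceAll(conf, []byte(key+` = "`+value+`"`))
    }

    func main() {
        conf := []byte("pause_image = \"old\"\ncgroup_manager = \"systemd\"\n")
        conf = setTOMLKey(conf, "pause_image", "registry.k8s.io/pause:3.9")
        conf = setTOMLKey(conf, "cgroup_manager", "cgroupfs")
        fmt.Print(string(conf))
    }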
	I0729 19:12:12.637202  156414 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 19:12:12.647141  156414 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 19:12:12.647204  156414 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 19:12:12.661099  156414 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 19:12:12.671936  156414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:12:12.795418  156414 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 19:12:12.934125  156414 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 19:12:12.934220  156414 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 19:12:12.939058  156414 start.go:563] Will wait 60s for crictl version
	I0729 19:12:12.939123  156414 ssh_runner.go:195] Run: which crictl
	I0729 19:12:12.943972  156414 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 19:12:12.990564  156414 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 19:12:12.990672  156414 ssh_runner.go:195] Run: crio --version
	I0729 19:12:13.018852  156414 ssh_runner.go:195] Run: crio --version
	I0729 19:12:13.053593  156414 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 19:12:13.054890  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetIP
	I0729 19:12:13.057601  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:13.057994  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:13.058025  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:13.058229  156414 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0729 19:12:13.062303  156414 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 19:12:13.075815  156414 kubeadm.go:883] updating cluster {Name:embed-certs-368536 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-368536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.95 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 19:12:13.075989  156414 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 19:12:13.076042  156414 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:12:13.112073  156414 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 19:12:13.112136  156414 ssh_runner.go:195] Run: which lz4
	I0729 19:12:13.116209  156414 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 19:12:13.120509  156414 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 19:12:13.120545  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 19:12:14.521429  156414 crio.go:462] duration metric: took 1.405252765s to copy over tarball
	I0729 19:12:14.521516  156414 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 19:12:16.740086  156414 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.218529852s)
	I0729 19:12:16.740129  156414 crio.go:469] duration metric: took 2.218665992s to extract the tarball
	I0729 19:12:16.740140  156414 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 19:12:16.779302  156414 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:12:16.825787  156414 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 19:12:16.825823  156414 cache_images.go:84] Images are preloaded, skipping loading
	I0729 19:12:16.825832  156414 kubeadm.go:934] updating node { 192.168.50.95 8443 v1.30.3 crio true true} ...
	I0729 19:12:16.825972  156414 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-368536 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.95
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-368536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 19:12:16.826034  156414 ssh_runner.go:195] Run: crio config
	I0729 19:12:16.873701  156414 cni.go:84] Creating CNI manager for ""
	I0729 19:12:16.873734  156414 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:12:16.873752  156414 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 19:12:16.873776  156414 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.95 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-368536 NodeName:embed-certs-368536 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.95"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.95 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 19:12:16.873923  156414 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.95
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-368536"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.95
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.95"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 19:12:16.873987  156414 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 19:12:16.884427  156414 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 19:12:16.884530  156414 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 19:12:16.895097  156414 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0729 19:12:16.914112  156414 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 19:12:16.931797  156414 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0729 19:12:16.949824  156414 ssh_runner.go:195] Run: grep 192.168.50.95	control-plane.minikube.internal$ /etc/hosts
	I0729 19:12:16.953765  156414 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.95	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 19:12:16.967138  156414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:12:17.088789  156414 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 19:12:17.108577  156414 certs.go:68] Setting up /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536 for IP: 192.168.50.95
	I0729 19:12:17.108601  156414 certs.go:194] generating shared ca certs ...
	I0729 19:12:17.108623  156414 certs.go:226] acquiring lock for ca certs: {Name:mk20106d58b3f22ea0fc0f4b499fa3fa572a2690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:12:17.108831  156414 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key
	I0729 19:12:17.109076  156414 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key
	I0729 19:12:17.109118  156414 certs.go:256] generating profile certs ...
	I0729 19:12:17.109296  156414 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/client.key
	I0729 19:12:17.109394  156414 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/apiserver.key.77ca8755
	I0729 19:12:17.109448  156414 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/proxy-client.key
	I0729 19:12:17.109619  156414 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282.pem (1338 bytes)
	W0729 19:12:17.109651  156414 certs.go:480] ignoring /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282_empty.pem, impossibly tiny 0 bytes
	I0729 19:12:17.109661  156414 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 19:12:17.109688  156414 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem (1078 bytes)
	I0729 19:12:17.109722  156414 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem (1123 bytes)
	I0729 19:12:17.109756  156414 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem (1679 bytes)
	I0729 19:12:17.109819  156414 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem (1708 bytes)
	I0729 19:12:17.110926  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 19:12:17.142003  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 19:12:17.176798  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 19:12:17.204272  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 19:12:17.244558  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0729 19:12:17.281935  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 19:12:17.306456  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 19:12:17.332576  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 19:12:17.357372  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282.pem --> /usr/share/ca-certificates/95282.pem (1338 bytes)
	I0729 19:12:17.380363  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem --> /usr/share/ca-certificates/952822.pem (1708 bytes)
	I0729 19:12:17.403737  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 19:12:17.428493  156414 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 19:12:17.445767  156414 ssh_runner.go:195] Run: openssl version
	I0729 19:12:17.451514  156414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/952822.pem && ln -fs /usr/share/ca-certificates/952822.pem /etc/ssl/certs/952822.pem"
	I0729 19:12:17.463663  156414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/952822.pem
	I0729 19:12:17.469467  156414 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:45 /usr/share/ca-certificates/952822.pem
	I0729 19:12:17.469523  156414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/952822.pem
	I0729 19:12:17.475754  156414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/952822.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 19:12:17.487617  156414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 19:12:17.498699  156414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:12:17.503205  156414 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:12:17.503257  156414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:12:17.508880  156414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 19:12:17.519570  156414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/95282.pem && ln -fs /usr/share/ca-certificates/95282.pem /etc/ssl/certs/95282.pem"
	I0729 19:12:17.530630  156414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/95282.pem
	I0729 19:12:17.535004  156414 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:45 /usr/share/ca-certificates/95282.pem
	I0729 19:12:17.535046  156414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/95282.pem
	I0729 19:12:17.540647  156414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/95282.pem /etc/ssl/certs/51391683.0"
	I0729 19:12:17.551719  156414 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 19:12:17.556199  156414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 19:12:17.562469  156414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 19:12:17.568561  156414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 19:12:17.574653  156414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 19:12:17.580458  156414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 19:12:17.586392  156414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
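The openssl x509 -noout -checkend 86400 runs above verify that each existing control-plane certificate remains valid for at least another 24 hours (86400 seconds) before it is reused. A minimal Go sketch of an equivalent check, for illustration only and not minikube's actual code; the certificate path is a placeholder taken from the log:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // checkend reports whether the PEM certificate at path is still valid
    // for at least the given duration (the openssl -checkend semantics).
    func checkend(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	// Valid if the expiry lies beyond now+d.
    	return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
    	// Placeholder path; the log checks several certs under /var/lib/minikube/certs.
    	ok, err := checkend("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("valid for another 24h:", ok)
    }
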
	I0729 19:12:17.592304  156414 kubeadm.go:392] StartCluster: {Name:embed-certs-368536 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:embed-certs-368536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.95 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:12:17.592422  156414 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 19:12:17.592467  156414 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:12:17.631628  156414 cri.go:89] found id: ""
	I0729 19:12:17.631701  156414 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 19:12:17.642636  156414 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 19:12:17.642665  156414 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 19:12:17.642731  156414 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 19:12:17.652551  156414 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 19:12:17.653603  156414 kubeconfig.go:125] found "embed-certs-368536" server: "https://192.168.50.95:8443"
	I0729 19:12:17.656022  156414 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 19:12:17.666987  156414 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.95
	I0729 19:12:17.667014  156414 kubeadm.go:1160] stopping kube-system containers ...
	I0729 19:12:17.667026  156414 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 19:12:17.667065  156414 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:12:17.709534  156414 cri.go:89] found id: ""
	I0729 19:12:17.709598  156414 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 19:12:17.729709  156414 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:12:17.739968  156414 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:12:17.739990  156414 kubeadm.go:157] found existing configuration files:
	
	I0729 19:12:17.740051  156414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 19:12:17.749727  156414 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:12:17.749794  156414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:12:17.760013  156414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 19:12:17.781070  156414 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:12:17.781135  156414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:12:17.794747  156414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 19:12:17.805862  156414 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:12:17.805938  156414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:12:17.815892  156414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 19:12:17.825005  156414 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:12:17.825072  156414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 19:12:17.834745  156414 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:12:17.844191  156414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:12:17.963586  156414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:12:19.325254  156414 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.361626819s)
	I0729 19:12:19.325289  156414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:12:19.537565  156414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:12:19.611697  156414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:12:19.710291  156414 api_server.go:52] waiting for apiserver process to appear ...
	I0729 19:12:19.710408  156414 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:12:20.210809  156414 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:12:20.711234  156414 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:12:20.727361  156414 api_server.go:72] duration metric: took 1.017067714s to wait for apiserver process to appear ...
	I0729 19:12:20.727396  156414 api_server.go:88] waiting for apiserver healthz status ...
	I0729 19:12:20.727432  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:20.727961  156414 api_server.go:269] stopped: https://192.168.50.95:8443/healthz: Get "https://192.168.50.95:8443/healthz": dial tcp 192.168.50.95:8443: connect: connection refused
	I0729 19:12:21.228408  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:23.622048  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 19:12:23.622093  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 19:12:23.622106  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:23.628174  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 19:12:23.628195  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 19:12:23.728403  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:23.732920  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:12:23.732969  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:12:24.227954  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:24.233109  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:12:24.233143  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:12:24.727641  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:24.735127  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:12:24.735156  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:12:25.227686  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:25.236914  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:12:25.236947  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:12:25.728500  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:25.735276  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:12:25.735307  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:12:26.227824  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:26.232439  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:12:26.232471  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:12:26.728521  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:26.732952  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:12:26.732989  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:12:27.227504  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:27.232166  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 200:
	ok
	I0729 19:12:27.238505  156414 api_server.go:141] control plane version: v1.30.3
	I0729 19:12:27.238531  156414 api_server.go:131] duration metric: took 6.511128129s to wait for apiserver health ...
	I0729 19:12:27.238541  156414 cni.go:84] Creating CNI manager for ""
	I0729 19:12:27.238560  156414 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:12:27.240943  156414 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 19:12:27.242526  156414 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 19:12:27.254578  156414 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 19:12:27.274700  156414 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 19:12:27.289359  156414 system_pods.go:59] 8 kube-system pods found
	I0729 19:12:27.289412  156414 system_pods.go:61] "coredns-7db6d8ff4d-dww2j" [31418aeb-98be-4b43-a687-5c8f64ec5e9d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:12:27.289424  156414 system_pods.go:61] "etcd-embed-certs-368536" [0a7aac00-9161-473a-87f1-2702211beaac] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 19:12:27.289443  156414 system_pods.go:61] "kube-apiserver-embed-certs-368536" [d6638a87-43df-457e-b335-1ab2aa03f421] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 19:12:27.289469  156414 system_pods.go:61] "kube-controller-manager-embed-certs-368536" [76fefc23-7794-40fc-845a-73d83eb55450] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 19:12:27.289478  156414 system_pods.go:61] "kube-proxy-9xwt2" [f637605b-9bc2-4922-b801-04f681a81e7c] Running
	I0729 19:12:27.289485  156414 system_pods.go:61] "kube-scheduler-embed-certs-368536" [476e1d42-6610-44b5-b77b-b25537f044eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 19:12:27.289495  156414 system_pods.go:61] "metrics-server-569cc877fc-xnkwq" [19328b03-8c7d-499f-9a30-31f023605e49] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:12:27.289500  156414 system_pods.go:61] "storage-provisioner" [b049d065-fbb1-48b6-a377-2938a0519c78] Running
	I0729 19:12:27.289513  156414 system_pods.go:74] duration metric: took 14.789212ms to wait for pod list to return data ...
	I0729 19:12:27.289525  156414 node_conditions.go:102] verifying NodePressure condition ...
	I0729 19:12:27.292692  156414 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 19:12:27.292716  156414 node_conditions.go:123] node cpu capacity is 2
	I0729 19:12:27.292809  156414 node_conditions.go:105] duration metric: took 3.278515ms to run NodePressure ...
	I0729 19:12:27.292825  156414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:12:27.563167  156414 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 19:12:27.567532  156414 kubeadm.go:739] kubelet initialised
	I0729 19:12:27.567552  156414 kubeadm.go:740] duration metric: took 4.363687ms waiting for restarted kubelet to initialise ...
	I0729 19:12:27.567566  156414 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:12:27.573549  156414 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-dww2j" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:29.580511  156414 pod_ready.go:102] pod "coredns-7db6d8ff4d-dww2j" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:32.080352  156414 pod_ready.go:102] pod "coredns-7db6d8ff4d-dww2j" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:34.079542  156414 pod_ready.go:92] pod "coredns-7db6d8ff4d-dww2j" in "kube-system" namespace has status "Ready":"True"
	I0729 19:12:34.079564  156414 pod_ready.go:81] duration metric: took 6.505994438s for pod "coredns-7db6d8ff4d-dww2j" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:34.079574  156414 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:36.086785  156414 pod_ready.go:102] pod "etcd-embed-certs-368536" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:37.089587  156414 pod_ready.go:92] pod "etcd-embed-certs-368536" in "kube-system" namespace has status "Ready":"True"
	I0729 19:12:37.089611  156414 pod_ready.go:81] duration metric: took 3.010030448s for pod "etcd-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.089621  156414 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.093814  156414 pod_ready.go:92] pod "kube-apiserver-embed-certs-368536" in "kube-system" namespace has status "Ready":"True"
	I0729 19:12:37.093833  156414 pod_ready.go:81] duration metric: took 4.206508ms for pod "kube-apiserver-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.093842  156414 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.097603  156414 pod_ready.go:92] pod "kube-controller-manager-embed-certs-368536" in "kube-system" namespace has status "Ready":"True"
	I0729 19:12:37.097621  156414 pod_ready.go:81] duration metric: took 3.772149ms for pod "kube-controller-manager-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.097633  156414 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9xwt2" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.101696  156414 pod_ready.go:92] pod "kube-proxy-9xwt2" in "kube-system" namespace has status "Ready":"True"
	I0729 19:12:37.101720  156414 pod_ready.go:81] duration metric: took 4.078653ms for pod "kube-proxy-9xwt2" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.101732  156414 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.107711  156414 pod_ready.go:92] pod "kube-scheduler-embed-certs-368536" in "kube-system" namespace has status "Ready":"True"
	I0729 19:12:37.107728  156414 pod_ready.go:81] duration metric: took 5.989066ms for pod "kube-scheduler-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.107738  156414 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:39.113796  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:41.115401  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:43.614461  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:45.614900  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:48.113738  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:50.114364  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:52.114790  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:54.613472  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:56.613889  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:59.114206  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:01.614516  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:04.114859  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:06.114993  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:08.615213  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:11.114015  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:13.114342  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:15.614423  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:18.114442  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:20.614465  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:23.117909  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:25.614155  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:28.114746  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
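
The healthz polling earlier in this log (api_server.go, 19:12:20 to 19:12:27) retries GET https://192.168.50.95:8443/healthz roughly every 500ms, treating 403 and 500 responses as "not ready yet" until the endpoint returns 200 "ok". A rough sketch of that kind of probe loop, assuming anonymous access to a self-signed apiserver endpoint; this is illustrative only, not minikube's actual implementation:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls url until it returns 200, or the deadline passes.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		// Anonymous probe against a self-signed apiserver cert, so skip verification.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    			// 403/500 mean the apiserver is up but not fully initialised; keep polling.
    			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out waiting for %s", url)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.50.95:8443/healthz", time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
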
	
	
	==> CRI-O <==
	Jul 29 19:13:31 default-k8s-diff-port-612270 crio[729]: time="2024-07-29 19:13:31.354000411Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722280411353975954,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=978dbef6-b818-4b4a-b3ed-91c4e02743e0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:13:31 default-k8s-diff-port-612270 crio[729]: time="2024-07-29 19:13:31.354477591Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c2116fe6-4efd-44d5-8a41-5fb8d3101d1b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:13:31 default-k8s-diff-port-612270 crio[729]: time="2024-07-29 19:13:31.354589247Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c2116fe6-4efd-44d5-8a41-5fb8d3101d1b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:13:31 default-k8s-diff-port-612270 crio[729]: time="2024-07-29 19:13:31.354801289Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c1c0d04e529681ad3b2d96d9fb1f982535ee7bc1ef1a06f73a71808947a2f58a,PodSandboxId:b27aa50a26aea1055b171f58d7f7034d660c10d5fea3aa02de11cee94a89fc53,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722279869983313509,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60704471-4b2f-4434-97fd-7b84419c8a24,},Annotations:map[string]string{io.kubernetes.container.hash: c60dbb11,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9684d8e303d3ed944c2ab32a160cd75c2716a57aebd11538f28436bce0248a81,PodSandboxId:3cda19ea2b809b678c712d512c491429b6cebd1535c32cd820ab30d06aba1791,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722279869534091468,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t4jjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d048fd15-d145-4fc9-8089-55972dfd052e,},Annotations:map[string]string{io.kubernetes.container.hash: 2db1f715,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:290e296b3fe0b8b28984f3dbede0d158c2d4ce5792e4722a4bfdb063b52b0278,PodSandboxId:b63077d49289f41b892ac67087f984c5b10386bbcb5180462101bf74696a6eb3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722279869284172958,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vd7lb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: e277e67a-e2b3-4c4c-a945-b46e419365c5,},Annotations:map[string]string{io.kubernetes.container.hash: 58f6aec4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a59b0de6efaa78f0541e0afbe618e250474f605d2f01a72c0fa968931ea9555,PodSandboxId:6752afc5607184af3635daa9d9defb5e3ed0f34f2db225b3a0ebcc74e7928883,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING
,CreatedAt:1722279869008834295,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2pgk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb9da3b5-0010-48fe-b349-6880fdd5404f,},Annotations:map[string]string{io.kubernetes.container.hash: 82601ef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3903a83fea545f12af1e31df128b43986f1e4f43bcc4f64cc7850bf2c6fdd95,PodSandboxId:e914858338e32df94dc1457bab74e692c8c4eb0b68643d013725003e08a794fd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722279849
469637536,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-612270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f951d43ab98b43cec81bd4e25c35a8b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9dee6822734aba300b98d475602db7ed7f5ed88ad559dc847eb74fae4ff90c54,PodSandboxId:82515bbf9d419412d248b3215e2999e23f27450e34f192fc73adf227fc4e3e05,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,Crea
tedAt:1722279849374721700,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-612270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64814ead93f163f1c678d369cede4d19,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cd7f00e83588b3df64a5bf3c2a1fbd318a08b41926d3b4c4dad87656ca1bfc9,PodSandboxId:b470e0883ff059cfa22c9497b26ea020590a11a8d71eae1c8d41ff1ffaf5757b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,Create
dAt:1722279849387772112,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-612270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46564e7237ffe15a29374acabc3bb6cf,},Annotations:map[string]string{io.kubernetes.container.hash: af56a501,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c761942700d84b93bcf0430e804257ea9c7618b8b0a62e64f28919db7f2c63fc,PodSandboxId:73dd6a50b62ae6f898d509ccf66bf4339b1030fcf981da0f8b98d91358c188a2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:17222798
49298410768,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-612270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36386695c0c8b1d1a591dbcf3cd8518f,},Annotations:map[string]string{io.kubernetes.container.hash: 700a48ed,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c18090bf0aba3683f313f3a6464d1c6a5903552e91110ee4ada32637a0a180b5,PodSandboxId:269536abda6f94b4823bd59c9e9043f0ea3ed4750de50933a0222a570427059f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722279559234895967,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-612270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46564e7237ffe15a29374acabc3bb6cf,},Annotations:map[string]string{io.kubernetes.container.hash: af56a501,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c2116fe6-4efd-44d5-8a41-5fb8d3101d1b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:13:31 default-k8s-diff-port-612270 crio[729]: time="2024-07-29 19:13:31.399911402Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=59a99a21-6aa3-46af-a8da-bf4df55e4625 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:13:31 default-k8s-diff-port-612270 crio[729]: time="2024-07-29 19:13:31.400003165Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=59a99a21-6aa3-46af-a8da-bf4df55e4625 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:13:31 default-k8s-diff-port-612270 crio[729]: time="2024-07-29 19:13:31.401923350Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ef644665-96d8-4ade-8bcc-d07d744b2172 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:13:31 default-k8s-diff-port-612270 crio[729]: time="2024-07-29 19:13:31.402321558Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722280411402298683,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ef644665-96d8-4ade-8bcc-d07d744b2172 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:13:31 default-k8s-diff-port-612270 crio[729]: time="2024-07-29 19:13:31.402997263Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6f19fc58-be79-468d-abf4-701b6c66a9c7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:13:31 default-k8s-diff-port-612270 crio[729]: time="2024-07-29 19:13:31.403081010Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6f19fc58-be79-468d-abf4-701b6c66a9c7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:13:31 default-k8s-diff-port-612270 crio[729]: time="2024-07-29 19:13:31.403340399Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c1c0d04e529681ad3b2d96d9fb1f982535ee7bc1ef1a06f73a71808947a2f58a,PodSandboxId:b27aa50a26aea1055b171f58d7f7034d660c10d5fea3aa02de11cee94a89fc53,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722279869983313509,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60704471-4b2f-4434-97fd-7b84419c8a24,},Annotations:map[string]string{io.kubernetes.container.hash: c60dbb11,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9684d8e303d3ed944c2ab32a160cd75c2716a57aebd11538f28436bce0248a81,PodSandboxId:3cda19ea2b809b678c712d512c491429b6cebd1535c32cd820ab30d06aba1791,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722279869534091468,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t4jjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d048fd15-d145-4fc9-8089-55972dfd052e,},Annotations:map[string]string{io.kubernetes.container.hash: 2db1f715,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:290e296b3fe0b8b28984f3dbede0d158c2d4ce5792e4722a4bfdb063b52b0278,PodSandboxId:b63077d49289f41b892ac67087f984c5b10386bbcb5180462101bf74696a6eb3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722279869284172958,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vd7lb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: e277e67a-e2b3-4c4c-a945-b46e419365c5,},Annotations:map[string]string{io.kubernetes.container.hash: 58f6aec4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a59b0de6efaa78f0541e0afbe618e250474f605d2f01a72c0fa968931ea9555,PodSandboxId:6752afc5607184af3635daa9d9defb5e3ed0f34f2db225b3a0ebcc74e7928883,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING
,CreatedAt:1722279869008834295,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2pgk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb9da3b5-0010-48fe-b349-6880fdd5404f,},Annotations:map[string]string{io.kubernetes.container.hash: 82601ef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3903a83fea545f12af1e31df128b43986f1e4f43bcc4f64cc7850bf2c6fdd95,PodSandboxId:e914858338e32df94dc1457bab74e692c8c4eb0b68643d013725003e08a794fd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722279849
469637536,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-612270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f951d43ab98b43cec81bd4e25c35a8b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9dee6822734aba300b98d475602db7ed7f5ed88ad559dc847eb74fae4ff90c54,PodSandboxId:82515bbf9d419412d248b3215e2999e23f27450e34f192fc73adf227fc4e3e05,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,Crea
tedAt:1722279849374721700,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-612270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64814ead93f163f1c678d369cede4d19,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cd7f00e83588b3df64a5bf3c2a1fbd318a08b41926d3b4c4dad87656ca1bfc9,PodSandboxId:b470e0883ff059cfa22c9497b26ea020590a11a8d71eae1c8d41ff1ffaf5757b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,Create
dAt:1722279849387772112,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-612270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46564e7237ffe15a29374acabc3bb6cf,},Annotations:map[string]string{io.kubernetes.container.hash: af56a501,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c761942700d84b93bcf0430e804257ea9c7618b8b0a62e64f28919db7f2c63fc,PodSandboxId:73dd6a50b62ae6f898d509ccf66bf4339b1030fcf981da0f8b98d91358c188a2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:17222798
49298410768,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-612270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36386695c0c8b1d1a591dbcf3cd8518f,},Annotations:map[string]string{io.kubernetes.container.hash: 700a48ed,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c18090bf0aba3683f313f3a6464d1c6a5903552e91110ee4ada32637a0a180b5,PodSandboxId:269536abda6f94b4823bd59c9e9043f0ea3ed4750de50933a0222a570427059f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722279559234895967,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-612270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46564e7237ffe15a29374acabc3bb6cf,},Annotations:map[string]string{io.kubernetes.container.hash: af56a501,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6f19fc58-be79-468d-abf4-701b6c66a9c7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:13:31 default-k8s-diff-port-612270 crio[729]: time="2024-07-29 19:13:31.445037408Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=df10338a-29db-4a55-9789-b468e380dd24 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:13:31 default-k8s-diff-port-612270 crio[729]: time="2024-07-29 19:13:31.445125446Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=df10338a-29db-4a55-9789-b468e380dd24 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:13:31 default-k8s-diff-port-612270 crio[729]: time="2024-07-29 19:13:31.446472246Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=36591c82-eb09-4573-b561-ae066a8854dd name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:13:31 default-k8s-diff-port-612270 crio[729]: time="2024-07-29 19:13:31.446906535Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722280411446886391,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=36591c82-eb09-4573-b561-ae066a8854dd name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:13:31 default-k8s-diff-port-612270 crio[729]: time="2024-07-29 19:13:31.447358420Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b858f896-6112-414c-a23f-6f79e6ab9028 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:13:31 default-k8s-diff-port-612270 crio[729]: time="2024-07-29 19:13:31.447427199Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b858f896-6112-414c-a23f-6f79e6ab9028 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:13:31 default-k8s-diff-port-612270 crio[729]: time="2024-07-29 19:13:31.447693454Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c1c0d04e529681ad3b2d96d9fb1f982535ee7bc1ef1a06f73a71808947a2f58a,PodSandboxId:b27aa50a26aea1055b171f58d7f7034d660c10d5fea3aa02de11cee94a89fc53,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722279869983313509,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60704471-4b2f-4434-97fd-7b84419c8a24,},Annotations:map[string]string{io.kubernetes.container.hash: c60dbb11,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9684d8e303d3ed944c2ab32a160cd75c2716a57aebd11538f28436bce0248a81,PodSandboxId:3cda19ea2b809b678c712d512c491429b6cebd1535c32cd820ab30d06aba1791,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722279869534091468,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t4jjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d048fd15-d145-4fc9-8089-55972dfd052e,},Annotations:map[string]string{io.kubernetes.container.hash: 2db1f715,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:290e296b3fe0b8b28984f3dbede0d158c2d4ce5792e4722a4bfdb063b52b0278,PodSandboxId:b63077d49289f41b892ac67087f984c5b10386bbcb5180462101bf74696a6eb3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722279869284172958,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vd7lb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: e277e67a-e2b3-4c4c-a945-b46e419365c5,},Annotations:map[string]string{io.kubernetes.container.hash: 58f6aec4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a59b0de6efaa78f0541e0afbe618e250474f605d2f01a72c0fa968931ea9555,PodSandboxId:6752afc5607184af3635daa9d9defb5e3ed0f34f2db225b3a0ebcc74e7928883,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING
,CreatedAt:1722279869008834295,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2pgk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb9da3b5-0010-48fe-b349-6880fdd5404f,},Annotations:map[string]string{io.kubernetes.container.hash: 82601ef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3903a83fea545f12af1e31df128b43986f1e4f43bcc4f64cc7850bf2c6fdd95,PodSandboxId:e914858338e32df94dc1457bab74e692c8c4eb0b68643d013725003e08a794fd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722279849
469637536,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-612270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f951d43ab98b43cec81bd4e25c35a8b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9dee6822734aba300b98d475602db7ed7f5ed88ad559dc847eb74fae4ff90c54,PodSandboxId:82515bbf9d419412d248b3215e2999e23f27450e34f192fc73adf227fc4e3e05,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,Crea
tedAt:1722279849374721700,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-612270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64814ead93f163f1c678d369cede4d19,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cd7f00e83588b3df64a5bf3c2a1fbd318a08b41926d3b4c4dad87656ca1bfc9,PodSandboxId:b470e0883ff059cfa22c9497b26ea020590a11a8d71eae1c8d41ff1ffaf5757b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,Create
dAt:1722279849387772112,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-612270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46564e7237ffe15a29374acabc3bb6cf,},Annotations:map[string]string{io.kubernetes.container.hash: af56a501,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c761942700d84b93bcf0430e804257ea9c7618b8b0a62e64f28919db7f2c63fc,PodSandboxId:73dd6a50b62ae6f898d509ccf66bf4339b1030fcf981da0f8b98d91358c188a2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:17222798
49298410768,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-612270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36386695c0c8b1d1a591dbcf3cd8518f,},Annotations:map[string]string{io.kubernetes.container.hash: 700a48ed,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c18090bf0aba3683f313f3a6464d1c6a5903552e91110ee4ada32637a0a180b5,PodSandboxId:269536abda6f94b4823bd59c9e9043f0ea3ed4750de50933a0222a570427059f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722279559234895967,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-612270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46564e7237ffe15a29374acabc3bb6cf,},Annotations:map[string]string{io.kubernetes.container.hash: af56a501,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b858f896-6112-414c-a23f-6f79e6ab9028 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:13:31 default-k8s-diff-port-612270 crio[729]: time="2024-07-29 19:13:31.486354902Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e37a618b-9ebb-4e8e-8274-316421b269fd name=/runtime.v1.RuntimeService/Version
	Jul 29 19:13:31 default-k8s-diff-port-612270 crio[729]: time="2024-07-29 19:13:31.486444480Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e37a618b-9ebb-4e8e-8274-316421b269fd name=/runtime.v1.RuntimeService/Version
	Jul 29 19:13:31 default-k8s-diff-port-612270 crio[729]: time="2024-07-29 19:13:31.487764335Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=626cf3f9-6e21-4cf2-a98f-d1d45aa7341c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:13:31 default-k8s-diff-port-612270 crio[729]: time="2024-07-29 19:13:31.488174527Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722280411488152050,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=626cf3f9-6e21-4cf2-a98f-d1d45aa7341c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:13:31 default-k8s-diff-port-612270 crio[729]: time="2024-07-29 19:13:31.488816470Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d37f5a53-ddf0-4136-8e1a-cc89b45db199 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:13:31 default-k8s-diff-port-612270 crio[729]: time="2024-07-29 19:13:31.488891242Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d37f5a53-ddf0-4136-8e1a-cc89b45db199 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:13:31 default-k8s-diff-port-612270 crio[729]: time="2024-07-29 19:13:31.489186184Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c1c0d04e529681ad3b2d96d9fb1f982535ee7bc1ef1a06f73a71808947a2f58a,PodSandboxId:b27aa50a26aea1055b171f58d7f7034d660c10d5fea3aa02de11cee94a89fc53,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722279869983313509,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60704471-4b2f-4434-97fd-7b84419c8a24,},Annotations:map[string]string{io.kubernetes.container.hash: c60dbb11,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9684d8e303d3ed944c2ab32a160cd75c2716a57aebd11538f28436bce0248a81,PodSandboxId:3cda19ea2b809b678c712d512c491429b6cebd1535c32cd820ab30d06aba1791,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722279869534091468,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t4jjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d048fd15-d145-4fc9-8089-55972dfd052e,},Annotations:map[string]string{io.kubernetes.container.hash: 2db1f715,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:290e296b3fe0b8b28984f3dbede0d158c2d4ce5792e4722a4bfdb063b52b0278,PodSandboxId:b63077d49289f41b892ac67087f984c5b10386bbcb5180462101bf74696a6eb3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722279869284172958,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vd7lb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: e277e67a-e2b3-4c4c-a945-b46e419365c5,},Annotations:map[string]string{io.kubernetes.container.hash: 58f6aec4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a59b0de6efaa78f0541e0afbe618e250474f605d2f01a72c0fa968931ea9555,PodSandboxId:6752afc5607184af3635daa9d9defb5e3ed0f34f2db225b3a0ebcc74e7928883,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING
,CreatedAt:1722279869008834295,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2pgk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb9da3b5-0010-48fe-b349-6880fdd5404f,},Annotations:map[string]string{io.kubernetes.container.hash: 82601ef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3903a83fea545f12af1e31df128b43986f1e4f43bcc4f64cc7850bf2c6fdd95,PodSandboxId:e914858338e32df94dc1457bab74e692c8c4eb0b68643d013725003e08a794fd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722279849
469637536,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-612270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f951d43ab98b43cec81bd4e25c35a8b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9dee6822734aba300b98d475602db7ed7f5ed88ad559dc847eb74fae4ff90c54,PodSandboxId:82515bbf9d419412d248b3215e2999e23f27450e34f192fc73adf227fc4e3e05,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,Crea
tedAt:1722279849374721700,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-612270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64814ead93f163f1c678d369cede4d19,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cd7f00e83588b3df64a5bf3c2a1fbd318a08b41926d3b4c4dad87656ca1bfc9,PodSandboxId:b470e0883ff059cfa22c9497b26ea020590a11a8d71eae1c8d41ff1ffaf5757b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,Create
dAt:1722279849387772112,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-612270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46564e7237ffe15a29374acabc3bb6cf,},Annotations:map[string]string{io.kubernetes.container.hash: af56a501,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c761942700d84b93bcf0430e804257ea9c7618b8b0a62e64f28919db7f2c63fc,PodSandboxId:73dd6a50b62ae6f898d509ccf66bf4339b1030fcf981da0f8b98d91358c188a2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:17222798
49298410768,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-612270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36386695c0c8b1d1a591dbcf3cd8518f,},Annotations:map[string]string{io.kubernetes.container.hash: 700a48ed,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c18090bf0aba3683f313f3a6464d1c6a5903552e91110ee4ada32637a0a180b5,PodSandboxId:269536abda6f94b4823bd59c9e9043f0ea3ed4750de50933a0222a570427059f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722279559234895967,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-612270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46564e7237ffe15a29374acabc3bb6cf,},Annotations:map[string]string{io.kubernetes.container.hash: af56a501,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d37f5a53-ddf0-4136-8e1a-cc89b45db199 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c1c0d04e52968       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   b27aa50a26aea       storage-provisioner
	9684d8e303d3e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   3cda19ea2b809       coredns-7db6d8ff4d-t4jjm
	290e296b3fe0b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   b63077d49289f       coredns-7db6d8ff4d-vd7lb
	6a59b0de6efaa       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   9 minutes ago       Running             kube-proxy                0                   6752afc560718       kube-proxy-2pgk2
	a3903a83fea54       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   9 minutes ago       Running             kube-controller-manager   2                   e914858338e32       kube-controller-manager-default-k8s-diff-port-612270
	7cd7f00e83588       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   9 minutes ago       Running             kube-apiserver            2                   b470e0883ff05       kube-apiserver-default-k8s-diff-port-612270
	9dee6822734ab       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   9 minutes ago       Running             kube-scheduler            2                   82515bbf9d419       kube-scheduler-default-k8s-diff-port-612270
	c761942700d84       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   73dd6a50b62ae       etcd-default-k8s-diff-port-612270
	c18090bf0aba3       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   14 minutes ago      Exited              kube-apiserver            1                   269536abda6f9       kube-apiserver-default-k8s-diff-port-612270
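For reference, the table above is the node-side CRI view reported by cri-o; an equivalent listing can usually be pulled directly on the node with crictl. This is a minimal sketch, assuming crictl is installed inside the VM and that cri-o listens on the same unix:///var/run/crio/crio.sock socket recorded in the node annotations below:

    # run on the node, e.g. via `minikube ssh -p default-k8s-diff-port-612270`
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a
    # logs for one container from the table, e.g. the first coredns instance (ID prefixes are accepted)
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs 290e296b3fe0b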
	
	
	==> coredns [290e296b3fe0b8b28984f3dbede0d158c2d4ce5792e4722a4bfdb063b52b0278] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [9684d8e303d3ed944c2ab32a160cd75c2716a57aebd11538f28436bce0248a81] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-612270
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-612270
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=de53afae5e8f4269438099f4dad14d93a8a17e35
	                    minikube.k8s.io/name=default-k8s-diff-port-612270
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T19_04_15_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 19:04:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-612270
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 19:13:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 19:09:41 +0000   Mon, 29 Jul 2024 19:04:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 19:09:41 +0000   Mon, 29 Jul 2024 19:04:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 19:09:41 +0000   Mon, 29 Jul 2024 19:04:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 19:09:41 +0000   Mon, 29 Jul 2024 19:04:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.152
	  Hostname:    default-k8s-diff-port-612270
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4f483107da7d464ca4baff73fe22ae90
	  System UUID:                4f483107-da7d-464c-a4ba-ff73fe22ae90
	  Boot ID:                    1625d9c3-7936-4519-a4ab-ca4b848415f1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-t4jjm                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m3s
	  kube-system                 coredns-7db6d8ff4d-vd7lb                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m3s
	  kube-system                 etcd-default-k8s-diff-port-612270                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m17s
	  kube-system                 kube-apiserver-default-k8s-diff-port-612270             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-612270    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-proxy-2pgk2                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m3s
	  kube-system                 kube-scheduler-default-k8s-diff-port-612270             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 metrics-server-569cc877fc-dfkzq                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m2s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m1s   kube-proxy       
	  Normal  Starting                 9m17s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m17s  kubelet          Node default-k8s-diff-port-612270 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m17s  kubelet          Node default-k8s-diff-port-612270 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m17s  kubelet          Node default-k8s-diff-port-612270 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m4s   node-controller  Node default-k8s-diff-port-612270 event: Registered Node default-k8s-diff-port-612270 in Controller
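The node description above corresponds to `kubectl describe node` for this profile. A short way to re-query just the reported conditions from the test host, assuming the kubeconfig context carries the profile name as minikube normally sets it:

    kubectl --context default-k8s-diff-port-612270 get node default-k8s-diff-port-612270 \
      -o jsonpath='{range .status.conditions[*]}{.type}{"="}{.status}{"\n"}{end}'
    kubectl --context default-k8s-diff-port-612270 describe node default-k8s-diff-port-612270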
	
	
	==> dmesg <==
	[  +0.039314] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.736303] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Jul29 18:59] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.572887] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.765017] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.059376] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.048963] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +0.211828] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[  +0.119854] systemd-fstab-generator[684]: Ignoring "noauto" option for root device
	[  +0.296714] systemd-fstab-generator[713]: Ignoring "noauto" option for root device
	[  +4.286695] systemd-fstab-generator[811]: Ignoring "noauto" option for root device
	[  +0.060638] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.090069] systemd-fstab-generator[936]: Ignoring "noauto" option for root device
	[  +4.579977] kauditd_printk_skb: 97 callbacks suppressed
	[  +7.342160] kauditd_printk_skb: 50 callbacks suppressed
	[  +8.492011] kauditd_printk_skb: 27 callbacks suppressed
	[Jul29 19:04] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.283564] systemd-fstab-generator[3604]: Ignoring "noauto" option for root device
	[  +4.334143] kauditd_printk_skb: 54 callbacks suppressed
	[  +1.719478] systemd-fstab-generator[3925]: Ignoring "noauto" option for root device
	[ +13.413557] systemd-fstab-generator[4117]: Ignoring "noauto" option for root device
	[  +0.086050] kauditd_printk_skb: 14 callbacks suppressed
	[Jul29 19:05] kauditd_printk_skb: 82 callbacks suppressed
	
	
	==> etcd [c761942700d84b93bcf0430e804257ea9c7618b8b0a62e64f28919db7f2c63fc] <==
	{"level":"info","ts":"2024-07-29T19:04:10.490882Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"900c4b71f7b778f3 became candidate at term 2"}
	{"level":"info","ts":"2024-07-29T19:04:10.490906Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"900c4b71f7b778f3 received MsgVoteResp from 900c4b71f7b778f3 at term 2"}
	{"level":"info","ts":"2024-07-29T19:04:10.490933Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"900c4b71f7b778f3 became leader at term 2"}
	{"level":"info","ts":"2024-07-29T19:04:10.490966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 900c4b71f7b778f3 elected leader 900c4b71f7b778f3 at term 2"}
	{"level":"info","ts":"2024-07-29T19:04:10.495814Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"900c4b71f7b778f3","local-member-attributes":"{Name:default-k8s-diff-port-612270 ClientURLs:[https://192.168.39.152:2379]}","request-path":"/0/members/900c4b71f7b778f3/attributes","cluster-id":"ce072c4559d5992c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T19:04:10.495895Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T19:04:10.496254Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T19:04:10.497557Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T19:04:10.522573Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T19:04:10.52265Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T19:04:10.522957Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.152:2379"}
	{"level":"info","ts":"2024-07-29T19:04:10.523096Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ce072c4559d5992c","local-member-id":"900c4b71f7b778f3","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T19:04:10.523184Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T19:04:10.523226Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T19:04:10.525148Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-07-29T19:12:19.850572Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"963.975055ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8715469132556829442 > lease_revoke:<id:78f390ffe0ee82b9>","response":"size:29"}
	{"level":"info","ts":"2024-07-29T19:12:19.850783Z","caller":"traceutil/trace.go:171","msg":"trace[12790753] linearizableReadLoop","detail":"{readStateIndex:940; appliedIndex:939; }","duration":"1.119661062s","start":"2024-07-29T19:12:18.73108Z","end":"2024-07-29T19:12:19.850741Z","steps":["trace[12790753] 'read index received'  (duration: 155.343049ms)","trace[12790753] 'applied index is now lower than readState.Index'  (duration: 964.316943ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T19:12:19.85092Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.119803455s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-29T19:12:19.850936Z","caller":"traceutil/trace.go:171","msg":"trace[1495363539] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:831; }","duration":"1.119874546s","start":"2024-07-29T19:12:18.731056Z","end":"2024-07-29T19:12:19.850931Z","steps":["trace[1495363539] 'agreement among raft nodes before linearized reading'  (duration: 1.119804237s)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T19:12:19.850961Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T19:12:18.731043Z","time spent":"1.119907123s","remote":"127.0.0.1:47888","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-07-29T19:12:19.851155Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"421.020826ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-29T19:12:19.85146Z","caller":"traceutil/trace.go:171","msg":"trace[937393213] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:831; }","duration":"421.353864ms","start":"2024-07-29T19:12:19.430093Z","end":"2024-07-29T19:12:19.851447Z","steps":["trace[937393213] 'agreement among raft nodes before linearized reading'  (duration: 421.020953ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T19:12:19.851594Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T19:12:19.430079Z","time spent":"421.505209ms","remote":"127.0.0.1:48126","response type":"/etcdserverpb.KV/Range","request count":0,"request size":76,"response count":0,"response size":29,"request content":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" "}
	{"level":"warn","ts":"2024-07-29T19:12:19.851208Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.14725ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ingress/\" range_end:\"/registry/ingress0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-29T19:12:19.851735Z","caller":"traceutil/trace.go:171","msg":"trace[1678992308] range","detail":"{range_begin:/registry/ingress/; range_end:/registry/ingress0; response_count:0; response_revision:831; }","duration":"109.704771ms","start":"2024-07-29T19:12:19.742023Z","end":"2024-07-29T19:12:19.851728Z","steps":["trace[1678992308] 'agreement among raft nodes before linearized reading'  (duration: 109.160813ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:13:31 up 14 min,  0 users,  load average: 0.10, 0.24, 0.17
	Linux default-k8s-diff-port-612270 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [7cd7f00e83588b3df64a5bf3c2a1fbd318a08b41926d3b4c4dad87656ca1bfc9] <==
	I0729 19:07:30.067769       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 19:09:11.972797       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 19:09:11.973113       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0729 19:09:12.973225       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 19:09:12.973277       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 19:09:12.973289       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 19:09:12.973231       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 19:09:12.973369       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 19:09:12.974387       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 19:10:12.973586       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 19:10:12.973865       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 19:10:12.973913       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 19:10:12.975604       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 19:10:12.975768       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 19:10:12.975798       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 19:12:12.974789       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 19:12:12.974929       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 19:12:12.974994       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 19:12:12.976025       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 19:12:12.976220       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 19:12:12.976262       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [c18090bf0aba3683f313f3a6464d1c6a5903552e91110ee4ada32637a0a180b5] <==
	W0729 19:04:05.496063       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:05.542692       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:05.580344       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:05.687403       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:05.728691       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:05.744044       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:05.768954       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:05.810605       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:05.870371       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:05.875025       1 logging.go:59] [core] [Channel #178 SubChannel #179] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:05.914618       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:05.935630       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:05.956726       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:06.004775       1 logging.go:59] [core] [Channel #181 SubChannel #182] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:06.017902       1 logging.go:59] [core] [Channel #10 SubChannel #11] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:06.070975       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:06.079972       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:06.089969       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:06.093579       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:06.213331       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:06.288407       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:06.753251       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:06.753251       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:06.902542       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:06.939755       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [a3903a83fea545f12af1e31df128b43986f1e4f43bcc4f64cc7850bf2c6fdd95] <==
	I0729 19:07:57.826634       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:08:27.403417       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 19:08:27.834562       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:08:57.408214       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 19:08:57.842742       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:09:27.413638       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 19:09:27.851081       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:09:57.418836       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 19:09:57.859014       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0729 19:10:25.818051       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="445.793µs"
	E0729 19:10:27.424269       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 19:10:27.866307       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0729 19:10:39.817168       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="84.153µs"
	E0729 19:10:57.430148       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 19:10:57.873766       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:11:27.436356       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 19:11:27.881557       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:11:57.441215       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 19:11:57.888451       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:12:27.447429       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 19:12:27.896812       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:12:57.452712       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 19:12:57.908177       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:13:27.458346       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 19:13:27.916565       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [6a59b0de6efaa78f0541e0afbe618e250474f605d2f01a72c0fa968931ea9555] <==
	I0729 19:04:29.572466       1 server_linux.go:69] "Using iptables proxy"
	I0729 19:04:29.602948       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.152"]
	I0729 19:04:29.966301       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 19:04:29.973643       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 19:04:29.973713       1 server_linux.go:165] "Using iptables Proxier"
	I0729 19:04:30.013736       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 19:04:30.016875       1 server.go:872] "Version info" version="v1.30.3"
	I0729 19:04:30.019686       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 19:04:30.021125       1 config.go:192] "Starting service config controller"
	I0729 19:04:30.021655       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 19:04:30.021743       1 config.go:319] "Starting node config controller"
	I0729 19:04:30.021765       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 19:04:30.026088       1 config.go:101] "Starting endpoint slice config controller"
	I0729 19:04:30.026115       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 19:04:30.122693       1 shared_informer.go:320] Caches are synced for node config
	I0729 19:04:30.122739       1 shared_informer.go:320] Caches are synced for service config
	I0729 19:04:30.127205       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [9dee6822734aba300b98d475602db7ed7f5ed88ad559dc847eb74fae4ff90c54] <==
	W0729 19:04:11.997363       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 19:04:11.997391       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 19:04:11.997442       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 19:04:11.997467       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 19:04:11.997575       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 19:04:11.997646       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 19:04:11.998396       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 19:04:11.998568       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 19:04:12.827260       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 19:04:12.827350       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0729 19:04:12.942388       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 19:04:12.942434       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 19:04:12.973695       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 19:04:12.973743       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 19:04:13.000824       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 19:04:13.000884       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 19:04:13.048352       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 19:04:13.048403       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 19:04:13.162171       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 19:04:13.162282       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0729 19:04:13.174181       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 19:04:13.174257       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 19:04:13.426283       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 19:04:13.426330       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0729 19:04:15.876145       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 19:11:14 default-k8s-diff-port-612270 kubelet[3932]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 19:11:14 default-k8s-diff-port-612270 kubelet[3932]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 19:11:14 default-k8s-diff-port-612270 kubelet[3932]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 19:11:14 default-k8s-diff-port-612270 kubelet[3932]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 19:11:24 default-k8s-diff-port-612270 kubelet[3932]: E0729 19:11:24.801419    3932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-dfkzq" podUID="69798da9-45ca-40cb-b066-c06b2b11b7ea"
	Jul 29 19:11:35 default-k8s-diff-port-612270 kubelet[3932]: E0729 19:11:35.800114    3932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-dfkzq" podUID="69798da9-45ca-40cb-b066-c06b2b11b7ea"
	Jul 29 19:11:46 default-k8s-diff-port-612270 kubelet[3932]: E0729 19:11:46.802782    3932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-dfkzq" podUID="69798da9-45ca-40cb-b066-c06b2b11b7ea"
	Jul 29 19:11:58 default-k8s-diff-port-612270 kubelet[3932]: E0729 19:11:58.800967    3932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-dfkzq" podUID="69798da9-45ca-40cb-b066-c06b2b11b7ea"
	Jul 29 19:12:11 default-k8s-diff-port-612270 kubelet[3932]: E0729 19:12:11.800132    3932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-dfkzq" podUID="69798da9-45ca-40cb-b066-c06b2b11b7ea"
	Jul 29 19:12:14 default-k8s-diff-port-612270 kubelet[3932]: E0729 19:12:14.834631    3932 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 19:12:14 default-k8s-diff-port-612270 kubelet[3932]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 19:12:14 default-k8s-diff-port-612270 kubelet[3932]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 19:12:14 default-k8s-diff-port-612270 kubelet[3932]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 19:12:14 default-k8s-diff-port-612270 kubelet[3932]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 19:12:22 default-k8s-diff-port-612270 kubelet[3932]: E0729 19:12:22.803132    3932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-dfkzq" podUID="69798da9-45ca-40cb-b066-c06b2b11b7ea"
	Jul 29 19:12:35 default-k8s-diff-port-612270 kubelet[3932]: E0729 19:12:35.801764    3932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-dfkzq" podUID="69798da9-45ca-40cb-b066-c06b2b11b7ea"
	Jul 29 19:12:47 default-k8s-diff-port-612270 kubelet[3932]: E0729 19:12:47.800256    3932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-dfkzq" podUID="69798da9-45ca-40cb-b066-c06b2b11b7ea"
	Jul 29 19:13:00 default-k8s-diff-port-612270 kubelet[3932]: E0729 19:13:00.801265    3932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-dfkzq" podUID="69798da9-45ca-40cb-b066-c06b2b11b7ea"
	Jul 29 19:13:13 default-k8s-diff-port-612270 kubelet[3932]: E0729 19:13:13.799652    3932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-dfkzq" podUID="69798da9-45ca-40cb-b066-c06b2b11b7ea"
	Jul 29 19:13:14 default-k8s-diff-port-612270 kubelet[3932]: E0729 19:13:14.833604    3932 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 19:13:14 default-k8s-diff-port-612270 kubelet[3932]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 19:13:14 default-k8s-diff-port-612270 kubelet[3932]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 19:13:14 default-k8s-diff-port-612270 kubelet[3932]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 19:13:14 default-k8s-diff-port-612270 kubelet[3932]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 19:13:28 default-k8s-diff-port-612270 kubelet[3932]: E0729 19:13:28.801394    3932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-dfkzq" podUID="69798da9-45ca-40cb-b066-c06b2b11b7ea"
	
	
	==> storage-provisioner [c1c0d04e529681ad3b2d96d9fb1f982535ee7bc1ef1a06f73a71808947a2f58a] <==
	I0729 19:04:30.117437       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 19:04:30.135851       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 19:04:30.135915       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 19:04:30.147577       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 19:04:30.147715       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-612270_7b018e85-001f-4428-9071-9f02eaeb6168!
	I0729 19:04:30.148643       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"39252ba2-999a-4c8d-a26c-b086676f7fa3", APIVersion:"v1", ResourceVersion:"417", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-612270_7b018e85-001f-4428-9071-9f02eaeb6168 became leader
	I0729 19:04:30.248361       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-612270_7b018e85-001f-4428-9071-9f02eaeb6168!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-612270 -n default-k8s-diff-port-612270
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-612270 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-dfkzq
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-612270 describe pod metrics-server-569cc877fc-dfkzq
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-612270 describe pod metrics-server-569cc877fc-dfkzq: exit status 1 (62.210833ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-dfkzq" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-612270 describe pod metrics-server-569cc877fc-dfkzq: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.26s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (139.03s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-368536 --alsologtostderr -v=3
E0729 19:04:53.656689   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/bridge-085245/client.crt: no such file or directory
E0729 19:05:04.528044   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/calico-085245/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-368536 --alsologtostderr -v=3: exit status 82 (2m0.508401059s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-368536"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 19:04:37.469509  155693 out.go:291] Setting OutFile to fd 1 ...
	I0729 19:04:37.469638  155693 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:04:37.469647  155693 out.go:304] Setting ErrFile to fd 2...
	I0729 19:04:37.469653  155693 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:04:37.469882  155693 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19339-88081/.minikube/bin
	I0729 19:04:37.470136  155693 out.go:298] Setting JSON to false
	I0729 19:04:37.470227  155693 mustload.go:65] Loading cluster: embed-certs-368536
	I0729 19:04:37.470572  155693 config.go:182] Loaded profile config "embed-certs-368536": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:04:37.470660  155693 profile.go:143] Saving config to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/config.json ...
	I0729 19:04:37.470842  155693 mustload.go:65] Loading cluster: embed-certs-368536
	I0729 19:04:37.470977  155693 config.go:182] Loaded profile config "embed-certs-368536": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:04:37.471026  155693 stop.go:39] StopHost: embed-certs-368536
	I0729 19:04:37.471442  155693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:04:37.471490  155693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:04:37.488210  155693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35293
	I0729 19:04:37.488661  155693 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:04:37.489278  155693 main.go:141] libmachine: Using API Version  1
	I0729 19:04:37.489302  155693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:04:37.489762  155693 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:04:37.492070  155693 out.go:177] * Stopping node "embed-certs-368536"  ...
	I0729 19:04:37.493406  155693 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0729 19:04:37.493446  155693 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:04:37.493695  155693 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0729 19:04:37.493718  155693 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:04:37.496366  155693 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:04:37.496807  155693 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:04:37.496848  155693 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:04:37.497043  155693 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:04:37.497209  155693 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:04:37.497390  155693 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:04:37.497575  155693 sshutil.go:53] new ssh client: &{IP:192.168.50.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/embed-certs-368536/id_rsa Username:docker}
	I0729 19:04:37.594326  155693 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0729 19:04:37.664581  155693 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0729 19:04:37.724393  155693 main.go:141] libmachine: Stopping "embed-certs-368536"...
	I0729 19:04:37.724436  155693 main.go:141] libmachine: (embed-certs-368536) Calling .GetState
	I0729 19:04:37.726112  155693 main.go:141] libmachine: (embed-certs-368536) Calling .Stop
	I0729 19:04:37.729598  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 0/120
	I0729 19:04:38.730963  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 1/120
	I0729 19:04:39.732314  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 2/120
	I0729 19:04:40.733742  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 3/120
	I0729 19:04:41.735243  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 4/120
	I0729 19:04:42.737287  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 5/120
	I0729 19:04:43.738808  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 6/120
	I0729 19:04:44.740173  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 7/120
	I0729 19:04:45.741658  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 8/120
	I0729 19:04:46.743440  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 9/120
	I0729 19:04:47.745649  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 10/120
	I0729 19:04:48.747456  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 11/120
	I0729 19:04:49.748722  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 12/120
	I0729 19:04:50.750068  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 13/120
	I0729 19:04:51.751711  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 14/120
	I0729 19:04:52.753193  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 15/120
	I0729 19:04:53.754530  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 16/120
	I0729 19:04:54.756163  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 17/120
	I0729 19:04:55.758132  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 18/120
	I0729 19:04:56.759520  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 19/120
	I0729 19:04:57.761385  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 20/120
	I0729 19:04:58.763769  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 21/120
	I0729 19:04:59.765516  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 22/120
	I0729 19:05:00.766644  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 23/120
	I0729 19:05:01.767922  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 24/120
	I0729 19:05:02.769995  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 25/120
	I0729 19:05:03.771255  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 26/120
	I0729 19:05:04.772563  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 27/120
	I0729 19:05:05.773877  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 28/120
	I0729 19:05:06.775343  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 29/120
	I0729 19:05:07.777271  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 30/120
	I0729 19:05:08.779322  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 31/120
	I0729 19:05:09.780619  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 32/120
	I0729 19:05:10.782748  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 33/120
	I0729 19:05:11.784156  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 34/120
	I0729 19:05:12.786505  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 35/120
	I0729 19:05:13.788116  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 36/120
	I0729 19:05:14.789970  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 37/120
	I0729 19:05:15.791216  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 38/120
	I0729 19:05:16.792712  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 39/120
	I0729 19:05:17.794600  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 40/120
	I0729 19:05:18.795917  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 41/120
	I0729 19:05:19.797391  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 42/120
	I0729 19:05:20.798590  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 43/120
	I0729 19:05:21.799894  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 44/120
	I0729 19:05:22.801678  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 45/120
	I0729 19:05:23.803137  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 46/120
	I0729 19:05:24.804459  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 47/120
	I0729 19:05:25.806058  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 48/120
	I0729 19:05:26.807675  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 49/120
	I0729 19:05:27.810003  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 50/120
	I0729 19:05:28.812357  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 51/120
	I0729 19:05:29.813717  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 52/120
	I0729 19:05:30.815294  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 53/120
	I0729 19:05:31.816589  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 54/120
	I0729 19:05:32.818566  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 55/120
	I0729 19:05:33.819882  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 56/120
	I0729 19:05:34.821131  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 57/120
	I0729 19:05:35.823322  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 58/120
	I0729 19:05:36.824622  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 59/120
	I0729 19:05:37.826615  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 60/120
	I0729 19:05:38.827971  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 61/120
	I0729 19:05:39.829759  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 62/120
	I0729 19:05:40.831302  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 63/120
	I0729 19:05:41.832914  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 64/120
	I0729 19:05:42.834967  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 65/120
	I0729 19:05:43.836312  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 66/120
	I0729 19:05:44.837650  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 67/120
	I0729 19:05:45.839748  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 68/120
	I0729 19:05:46.841268  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 69/120
	I0729 19:05:47.843413  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 70/120
	I0729 19:05:48.844906  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 71/120
	I0729 19:05:49.846189  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 72/120
	I0729 19:05:50.847424  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 73/120
	I0729 19:05:51.848611  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 74/120
	I0729 19:05:52.850651  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 75/120
	I0729 19:05:53.851920  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 76/120
	I0729 19:05:54.853238  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 77/120
	I0729 19:05:55.855427  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 78/120
	I0729 19:05:56.856716  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 79/120
	I0729 19:05:57.858847  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 80/120
	I0729 19:05:58.860199  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 81/120
	I0729 19:05:59.861749  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 82/120
	I0729 19:06:00.863918  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 83/120
	I0729 19:06:01.865280  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 84/120
	I0729 19:06:02.867150  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 85/120
	I0729 19:06:03.868619  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 86/120
	I0729 19:06:04.869914  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 87/120
	I0729 19:06:05.871316  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 88/120
	I0729 19:06:06.872838  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 89/120
	I0729 19:06:07.874918  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 90/120
	I0729 19:06:08.876428  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 91/120
	I0729 19:06:09.878064  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 92/120
	I0729 19:06:10.879640  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 93/120
	I0729 19:06:11.882047  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 94/120
	I0729 19:06:12.884060  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 95/120
	I0729 19:06:13.885514  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 96/120
	I0729 19:06:14.886885  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 97/120
	I0729 19:06:15.889086  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 98/120
	I0729 19:06:16.890387  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 99/120
	I0729 19:06:17.892345  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 100/120
	I0729 19:06:18.893714  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 101/120
	I0729 19:06:19.895283  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 102/120
	I0729 19:06:20.897027  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 103/120
	I0729 19:06:21.898424  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 104/120
	I0729 19:06:22.900358  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 105/120
	I0729 19:06:23.901837  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 106/120
	I0729 19:06:24.903556  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 107/120
	I0729 19:06:25.904820  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 108/120
	I0729 19:06:26.907182  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 109/120
	I0729 19:06:27.908973  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 110/120
	I0729 19:06:28.910166  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 111/120
	I0729 19:06:29.911357  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 112/120
	I0729 19:06:30.912610  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 113/120
	I0729 19:06:31.914129  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 114/120
	I0729 19:06:32.916285  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 115/120
	I0729 19:06:33.917578  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 116/120
	I0729 19:06:34.919405  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 117/120
	I0729 19:06:35.921004  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 118/120
	I0729 19:06:36.922451  155693 main.go:141] libmachine: (embed-certs-368536) Waiting for machine to stop 119/120
	I0729 19:06:37.923701  155693 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0729 19:06:37.923793  155693 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0729 19:06:37.925866  155693 out.go:177] 
	W0729 19:06:37.927301  155693 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0729 19:06:37.927321  155693 out.go:239] * 
	* 
	W0729 19:06:37.930964  155693 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 19:06:37.932293  155693 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-368536 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-368536 -n embed-certs-368536
E0729 19:06:39.106637   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/kindnet-085245/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-368536 -n embed-certs-368536: exit status 3 (18.522957817s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 19:06:56.457177  156204 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.95:22: connect: no route to host
	E0729 19:06:56.457199  156204 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.95:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-368536" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.03s)
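
The stop above polled the kvm2 machine 120 times at one-second intervals before giving up with GUEST_STOP_TIMEOUT, and the stop command exited with status 82. A minimal shell sketch of how one might re-run the failing step and gather the artifacts the error box asks for, assuming the same binary path and profile name as the test; the virsh check assumes the libvirt system URI this profile is configured with (qemu:///system):

	# Re-run the stop that timed out (same arguments as the failing test step).
	out/minikube-linux-amd64 stop -p embed-certs-368536 --alsologtostderr -v=3

	# Collect the logs the error box asks to attach to a GitHub issue.
	out/minikube-linux-amd64 -p embed-certs-368536 logs --file=logs.txt

	# With the kvm2 driver, the libvirt domain state can be checked directly.
	virsh -c qemu:///system list --all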

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.17s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0729 19:05:47.656743   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/auto-085245/client.crt: no such file or directory
E0729 19:05:53.334134   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-524369 -n no-preload-524369
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-07-29 19:14:08.664968617 +0000 UTC m=+6058.635672389
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
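
The assertion above waited 9m0s for a pod matching k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace and never saw one. A hedged sketch of the equivalent manual check, assuming minikube's usual behaviour of naming the kubeconfig context after the profile (no-preload-524369):

	# List dashboard pods with the same namespace and label selector the test waits on.
	kubectl --context no-preload-524369 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard

	# If nothing shows up, inspect the deployment and recent events in that namespace.
	kubectl --context no-preload-524369 -n kubernetes-dashboard get deploy
	kubectl --context no-preload-524369 -n kubernetes-dashboard get events --sort-by=.lastTimestamp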
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-524369 -n no-preload-524369
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-524369 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-524369 logs -n 25: (1.221862661s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p old-k8s-version-834964        | old-k8s-version-834964       | jenkins | v1.33.1 | 29 Jul 24 18:53 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-524369                  | no-preload-524369            | jenkins | v1.33.1 | 29 Jul 24 18:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-524369 --memory=2200                     | no-preload-524369            | jenkins | v1.33.1 | 29 Jul 24 18:54 UTC | 29 Jul 24 19:05 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-612270       | default-k8s-diff-port-612270 | jenkins | v1.33.1 | 29 Jul 24 18:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-612270 | jenkins | v1.33.1 | 29 Jul 24 18:55 UTC | 29 Jul 24 19:04 UTC |
	|         | default-k8s-diff-port-612270                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-834964                              | old-k8s-version-834964       | jenkins | v1.33.1 | 29 Jul 24 18:55 UTC | 29 Jul 24 18:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-834964             | old-k8s-version-834964       | jenkins | v1.33.1 | 29 Jul 24 18:55 UTC | 29 Jul 24 18:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-834964                              | old-k8s-version-834964       | jenkins | v1.33.1 | 29 Jul 24 18:55 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-974855                              | cert-expiration-974855       | jenkins | v1.33.1 | 29 Jul 24 19:01 UTC | 29 Jul 24 19:01 UTC |
	| start   | -p newest-cni-453780 --memory=2200 --alsologtostderr   | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:01 UTC | 29 Jul 24 19:02 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-453780             | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-453780                                   | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-453780                  | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-453780 --memory=2200 --alsologtostderr   | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| image   | newest-cni-453780 image list                           | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-453780                                   | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-453780                                   | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-453780                                   | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	| delete  | -p newest-cni-453780                                   | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	| delete  | -p                                                     | disable-driver-mounts-148539 | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | disable-driver-mounts-148539                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-368536                                  | embed-certs-368536           | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:04 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-368536            | embed-certs-368536           | jenkins | v1.33.1 | 29 Jul 24 19:04 UTC | 29 Jul 24 19:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-368536                                  | embed-certs-368536           | jenkins | v1.33.1 | 29 Jul 24 19:04 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-368536                 | embed-certs-368536           | jenkins | v1.33.1 | 29 Jul 24 19:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-368536                                  | embed-certs-368536           | jenkins | v1.33.1 | 29 Jul 24 19:07 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 19:07:08
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 19:07:08.883432  156414 out.go:291] Setting OutFile to fd 1 ...
	I0729 19:07:08.883769  156414 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:07:08.883781  156414 out.go:304] Setting ErrFile to fd 2...
	I0729 19:07:08.883788  156414 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:07:08.883976  156414 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19339-88081/.minikube/bin
	I0729 19:07:08.884534  156414 out.go:298] Setting JSON to false
	I0729 19:07:08.885578  156414 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":13749,"bootTime":1722266280,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 19:07:08.885642  156414 start.go:139] virtualization: kvm guest
	I0729 19:07:08.888023  156414 out.go:177] * [embed-certs-368536] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 19:07:08.889601  156414 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 19:07:08.889601  156414 notify.go:220] Checking for updates...
	I0729 19:07:08.892436  156414 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 19:07:08.893966  156414 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19339-88081/kubeconfig
	I0729 19:07:08.895257  156414 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19339-88081/.minikube
	I0729 19:07:08.896516  156414 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 19:07:08.897746  156414 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 19:07:08.899225  156414 config.go:182] Loaded profile config "embed-certs-368536": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:07:08.899588  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:07:08.899642  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:07:08.914943  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40387
	I0729 19:07:08.915307  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:07:08.915885  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:07:08.915905  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:07:08.916305  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:07:08.916486  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:07:08.916703  156414 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 19:07:08.917034  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:07:08.917074  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:07:08.931497  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36817
	I0729 19:07:08.931857  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:07:08.932280  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:07:08.932300  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:07:08.932640  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:07:08.932819  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:07:08.968386  156414 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 19:07:08.969538  156414 start.go:297] selected driver: kvm2
	I0729 19:07:08.969556  156414 start.go:901] validating driver "kvm2" against &{Name:embed-certs-368536 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-368536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.95 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:07:08.969681  156414 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 19:07:08.970358  156414 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 19:07:08.970428  156414 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19339-88081/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 19:07:08.986808  156414 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 19:07:08.987271  156414 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 19:07:08.987351  156414 cni.go:84] Creating CNI manager for ""
	I0729 19:07:08.987370  156414 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:07:08.987443  156414 start.go:340] cluster config:
	{Name:embed-certs-368536 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-368536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.95 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:07:08.987605  156414 iso.go:125] acquiring lock: {Name:mkff602c753bfa3e0d79d57ee8dc490f2e8d0298 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 19:07:08.989202  156414 out.go:177] * Starting "embed-certs-368536" primary control-plane node in "embed-certs-368536" cluster
	I0729 19:07:08.990452  156414 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 19:07:08.990496  156414 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 19:07:08.990510  156414 cache.go:56] Caching tarball of preloaded images
	I0729 19:07:08.990604  156414 preload.go:172] Found /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 19:07:08.990618  156414 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 19:07:08.990746  156414 profile.go:143] Saving config to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/config.json ...
	I0729 19:07:08.990949  156414 start.go:360] acquireMachinesLock for embed-certs-368536: {Name:mkb4fc91615189a18c2505c715f6575ee0da2912 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 19:07:08.990994  156414 start.go:364] duration metric: took 26.792µs to acquireMachinesLock for "embed-certs-368536"
	I0729 19:07:08.991013  156414 start.go:96] Skipping create...Using existing machine configuration
	I0729 19:07:08.991023  156414 fix.go:54] fixHost starting: 
	I0729 19:07:08.991314  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:07:08.991356  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:07:09.006100  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45265
	I0729 19:07:09.006507  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:07:09.007034  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:07:09.007060  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:07:09.007401  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:07:09.007594  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:07:09.007758  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetState
	I0729 19:07:09.009424  156414 fix.go:112] recreateIfNeeded on embed-certs-368536: state=Running err=<nil>
	W0729 19:07:09.009448  156414 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 19:07:09.011240  156414 out.go:177] * Updating the running kvm2 "embed-certs-368536" VM ...
	I0729 19:07:09.012305  156414 machine.go:94] provisionDockerMachine start ...
	I0729 19:07:09.012324  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:07:09.012506  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:07:09.014924  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:07:09.015334  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:07:09.015362  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:07:09.015507  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:07:09.015664  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:07:09.015796  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:07:09.015962  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:07:09.016106  156414 main.go:141] libmachine: Using SSH client type: native
	I0729 19:07:09.016288  156414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.95 22 <nil> <nil>}
	I0729 19:07:09.016300  156414 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 19:07:11.913157  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:14.985074  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:21.065092  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:24.137179  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:30.217186  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:33.289154  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:41.417004  152077 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:07:41.417303  152077 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:07:41.417326  152077 kubeadm.go:310] 
	I0729 19:07:41.417370  152077 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 19:07:41.417434  152077 kubeadm.go:310] 		timed out waiting for the condition
	I0729 19:07:41.417456  152077 kubeadm.go:310] 
	I0729 19:07:41.417514  152077 kubeadm.go:310] 	This error is likely caused by:
	I0729 19:07:41.417586  152077 kubeadm.go:310] 		- The kubelet is not running
	I0729 19:07:41.417720  152077 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 19:07:41.417732  152077 kubeadm.go:310] 
	I0729 19:07:41.417870  152077 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 19:07:41.417917  152077 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 19:07:41.417972  152077 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 19:07:41.417982  152077 kubeadm.go:310] 
	I0729 19:07:41.418104  152077 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 19:07:41.418232  152077 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 19:07:41.418247  152077 kubeadm.go:310] 
	I0729 19:07:41.418370  152077 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 19:07:41.418477  152077 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 19:07:41.418596  152077 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 19:07:41.418696  152077 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 19:07:41.418706  152077 kubeadm.go:310] 
	I0729 19:07:41.419442  152077 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 19:07:41.419562  152077 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 19:07:41.419660  152077 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 19:07:41.419767  152077 kubeadm.go:394] duration metric: took 8m3.273724985s to StartCluster
	I0729 19:07:41.419853  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:07:41.419923  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:07:41.466895  152077 cri.go:89] found id: ""
	I0729 19:07:41.466922  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.466933  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:07:41.466941  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:07:41.467013  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:07:41.500836  152077 cri.go:89] found id: ""
	I0729 19:07:41.500876  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.500888  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:07:41.500896  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:07:41.500949  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:07:41.534917  152077 cri.go:89] found id: ""
	I0729 19:07:41.534946  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.534958  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:07:41.534968  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:07:41.535038  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:07:41.570516  152077 cri.go:89] found id: ""
	I0729 19:07:41.570545  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.570556  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:07:41.570565  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:07:41.570640  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:07:41.608833  152077 cri.go:89] found id: ""
	I0729 19:07:41.608881  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.608894  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:07:41.608902  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:07:41.608969  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:07:41.644079  152077 cri.go:89] found id: ""
	I0729 19:07:41.644114  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.644127  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:07:41.644136  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:07:41.644198  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:07:41.678991  152077 cri.go:89] found id: ""
	I0729 19:07:41.679019  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.679028  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:07:41.679035  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:07:41.679088  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:07:41.722775  152077 cri.go:89] found id: ""
	I0729 19:07:41.722803  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.722815  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:07:41.722829  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:07:41.722857  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:07:41.841614  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:07:41.841641  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:07:41.841658  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:07:41.945679  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:07:41.945716  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:07:41.984505  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:07:41.984536  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:07:42.037376  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:07:42.037418  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0729 19:07:42.051259  152077 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0729 19:07:42.051310  152077 out.go:239] * 
	W0729 19:07:42.051369  152077 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 19:07:42.051390  152077 out.go:239] * 
	W0729 19:07:42.052280  152077 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 19:07:42.055255  152077 out.go:177] 
	W0729 19:07:42.056299  152077 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 19:07:42.056362  152077 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0729 19:07:42.056391  152077 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0729 19:07:42.057745  152077 out.go:177] 
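The run above exits with K8S_KUBELET_NOT_RUNNING: kubeadm timed out waiting for the kubelet, and the log itself points at 'journalctl -xeu kubelet', the crictl listing, and retrying with a systemd cgroup driver. A rough troubleshooting sketch, assuming SSH access to the affected minikube VM and using only the commands the log suggests (<profile> is a placeholder for the failing profile name, not taken from this report):

	minikube ssh -p <profile>
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet | tail -n 50
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	exit
	minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd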
	I0729 19:07:42.413268  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:45.481071  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:51.561079  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:54.633091  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:00.713100  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:03.785182  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:09.865116  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:12.937102  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:19.017094  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:22.093191  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:28.169116  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:31.241130  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:37.321134  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:40.393092  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:46.473118  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:49.545166  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:55.625086  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:58.697184  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:04.777113  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:07.849165  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:13.933065  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:17.001100  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:23.081133  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:26.153086  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:32.233178  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:35.305183  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:41.385184  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:44.461106  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:50.537120  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:53.609150  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:59.689091  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:02.761193  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:08.845074  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:11.917067  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:17.993090  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:21.065137  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:27.145098  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:30.217175  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:36.301060  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:39.369118  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:45.449082  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:48.521097  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:54.601120  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:57.673200  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:03.753116  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:06.825136  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:12.905195  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:15.977118  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:22.057076  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:25.129144  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:31.209150  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:34.281164  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:40.365067  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:43.437113  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:46.437452  156414 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 19:11:46.437532  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetMachineName
	I0729 19:11:46.437865  156414 buildroot.go:166] provisioning hostname "embed-certs-368536"
	I0729 19:11:46.437902  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetMachineName
	I0729 19:11:46.438117  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:11:46.439690  156414 machine.go:97] duration metric: took 4m37.427371174s to provisionDockerMachine
	I0729 19:11:46.439733  156414 fix.go:56] duration metric: took 4m37.448711854s for fixHost
	I0729 19:11:46.439746  156414 start.go:83] releasing machines lock for "embed-certs-368536", held for 4m37.448735558s
	W0729 19:11:46.439780  156414 start.go:714] error starting host: provision: host is not running
	W0729 19:11:46.439926  156414 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0729 19:11:46.439941  156414 start.go:729] Will try again in 5 seconds ...
	I0729 19:11:51.440155  156414 start.go:360] acquireMachinesLock for embed-certs-368536: {Name:mkb4fc91615189a18c2505c715f6575ee0da2912 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 19:11:51.440266  156414 start.go:364] duration metric: took 69.381µs to acquireMachinesLock for "embed-certs-368536"
	I0729 19:11:51.440311  156414 start.go:96] Skipping create...Using existing machine configuration
	I0729 19:11:51.440322  156414 fix.go:54] fixHost starting: 
	I0729 19:11:51.440735  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:11:51.440764  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:11:51.455959  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35793
	I0729 19:11:51.456443  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:11:51.457053  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:11:51.457078  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:11:51.457475  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:11:51.457721  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:11:51.457885  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetState
	I0729 19:11:51.459531  156414 fix.go:112] recreateIfNeeded on embed-certs-368536: state=Stopped err=<nil>
	I0729 19:11:51.459558  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	W0729 19:11:51.459736  156414 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 19:11:51.461603  156414 out.go:177] * Restarting existing kvm2 VM for "embed-certs-368536" ...
	I0729 19:11:51.462768  156414 main.go:141] libmachine: (embed-certs-368536) Calling .Start
	I0729 19:11:51.462973  156414 main.go:141] libmachine: (embed-certs-368536) Ensuring networks are active...
	I0729 19:11:51.463679  156414 main.go:141] libmachine: (embed-certs-368536) Ensuring network default is active
	I0729 19:11:51.464155  156414 main.go:141] libmachine: (embed-certs-368536) Ensuring network mk-embed-certs-368536 is active
	I0729 19:11:51.464571  156414 main.go:141] libmachine: (embed-certs-368536) Getting domain xml...
	I0729 19:11:51.465359  156414 main.go:141] libmachine: (embed-certs-368536) Creating domain...
	I0729 19:11:51.794918  156414 main.go:141] libmachine: (embed-certs-368536) Waiting to get IP...
	I0729 19:11:51.795679  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:51.796159  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:51.796221  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:51.796150  157493 retry.go:31] will retry after 235.729444ms: waiting for machine to come up
	I0729 19:11:52.033554  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:52.034180  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:52.034207  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:52.034103  157493 retry.go:31] will retry after 323.595446ms: waiting for machine to come up
	I0729 19:11:52.359640  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:52.360204  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:52.360233  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:52.360143  157493 retry.go:31] will retry after 341.954873ms: waiting for machine to come up
	I0729 19:11:52.703779  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:52.704350  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:52.704373  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:52.704298  157493 retry.go:31] will retry after 385.738264ms: waiting for machine to come up
	I0729 19:11:53.091976  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:53.092451  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:53.092484  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:53.092397  157493 retry.go:31] will retry after 759.811264ms: waiting for machine to come up
	I0729 19:11:53.853241  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:53.853799  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:53.853826  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:53.853755  157493 retry.go:31] will retry after 761.294244ms: waiting for machine to come up
	I0729 19:11:54.616298  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:54.617009  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:54.617039  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:54.616952  157493 retry.go:31] will retry after 1.056936741s: waiting for machine to come up
	I0729 19:11:55.675047  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:55.675491  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:55.675514  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:55.675442  157493 retry.go:31] will retry after 1.232760679s: waiting for machine to come up
	I0729 19:11:56.909745  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:56.910283  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:56.910309  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:56.910228  157493 retry.go:31] will retry after 1.432617964s: waiting for machine to come up
	I0729 19:11:58.344399  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:58.345006  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:58.345033  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:58.344957  157493 retry.go:31] will retry after 1.914060621s: waiting for machine to come up
	I0729 19:12:00.262146  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:00.262707  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:12:00.262739  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:12:00.262638  157493 retry.go:31] will retry after 2.77447957s: waiting for machine to come up
	I0729 19:12:03.039059  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:03.039693  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:12:03.039718  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:12:03.039650  157493 retry.go:31] will retry after 2.755354142s: waiting for machine to come up
	I0729 19:12:05.797890  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:05.798279  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:12:05.798306  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:12:05.798234  157493 retry.go:31] will retry after 4.501451096s: waiting for machine to come up
	I0729 19:12:10.304116  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.304586  156414 main.go:141] libmachine: (embed-certs-368536) Found IP for machine: 192.168.50.95
	I0729 19:12:10.304616  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has current primary IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.304626  156414 main.go:141] libmachine: (embed-certs-368536) Reserving static IP address...
	I0729 19:12:10.305096  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "embed-certs-368536", mac: "52:54:00:86:e7:e8", ip: "192.168.50.95"} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.305134  156414 main.go:141] libmachine: (embed-certs-368536) DBG | skip adding static IP to network mk-embed-certs-368536 - found existing host DHCP lease matching {name: "embed-certs-368536", mac: "52:54:00:86:e7:e8", ip: "192.168.50.95"}
	I0729 19:12:10.305151  156414 main.go:141] libmachine: (embed-certs-368536) Reserved static IP address: 192.168.50.95
	I0729 19:12:10.305166  156414 main.go:141] libmachine: (embed-certs-368536) Waiting for SSH to be available...
	I0729 19:12:10.305184  156414 main.go:141] libmachine: (embed-certs-368536) DBG | Getting to WaitForSSH function...
	I0729 19:12:10.307568  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.307936  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.307972  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.308079  156414 main.go:141] libmachine: (embed-certs-368536) DBG | Using SSH client type: external
	I0729 19:12:10.308110  156414 main.go:141] libmachine: (embed-certs-368536) DBG | Using SSH private key: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/embed-certs-368536/id_rsa (-rw-------)
	I0729 19:12:10.308140  156414 main.go:141] libmachine: (embed-certs-368536) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.95 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19339-88081/.minikube/machines/embed-certs-368536/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 19:12:10.308157  156414 main.go:141] libmachine: (embed-certs-368536) DBG | About to run SSH command:
	I0729 19:12:10.308170  156414 main.go:141] libmachine: (embed-certs-368536) DBG | exit 0
	I0729 19:12:10.436958  156414 main.go:141] libmachine: (embed-certs-368536) DBG | SSH cmd err, output: <nil>: 
	I0729 19:12:10.437313  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetConfigRaw
	I0729 19:12:10.437962  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetIP
	I0729 19:12:10.440164  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.440520  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.440545  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.440930  156414 profile.go:143] Saving config to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/config.json ...
	I0729 19:12:10.441184  156414 machine.go:94] provisionDockerMachine start ...
	I0729 19:12:10.441207  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:12:10.441430  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:10.443802  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.444155  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.444187  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.444375  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:10.444541  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:10.444719  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:10.444897  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:10.445064  156414 main.go:141] libmachine: Using SSH client type: native
	I0729 19:12:10.445289  156414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.95 22 <nil> <nil>}
	I0729 19:12:10.445300  156414 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 19:12:10.557371  156414 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 19:12:10.557405  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetMachineName
	I0729 19:12:10.557675  156414 buildroot.go:166] provisioning hostname "embed-certs-368536"
	I0729 19:12:10.557706  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetMachineName
	I0729 19:12:10.557905  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:10.560444  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.560793  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.560819  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.561027  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:10.561249  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:10.561442  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:10.561594  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:10.561783  156414 main.go:141] libmachine: Using SSH client type: native
	I0729 19:12:10.561990  156414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.95 22 <nil> <nil>}
	I0729 19:12:10.562006  156414 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-368536 && echo "embed-certs-368536" | sudo tee /etc/hostname
	I0729 19:12:10.688844  156414 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-368536
	
	I0729 19:12:10.688891  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:10.691701  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.692075  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.692102  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.692293  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:10.692468  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:10.692660  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:10.692821  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:10.693024  156414 main.go:141] libmachine: Using SSH client type: native
	I0729 19:12:10.693222  156414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.95 22 <nil> <nil>}
	I0729 19:12:10.693245  156414 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-368536' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-368536/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-368536' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 19:12:10.814346  156414 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 19:12:10.814376  156414 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19339-88081/.minikube CaCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19339-88081/.minikube}
	I0729 19:12:10.814400  156414 buildroot.go:174] setting up certificates
	I0729 19:12:10.814409  156414 provision.go:84] configureAuth start
	I0729 19:12:10.814417  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetMachineName
	I0729 19:12:10.814706  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetIP
	I0729 19:12:10.817503  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.817859  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.817886  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.818025  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:10.820077  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.820443  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.820473  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.820617  156414 provision.go:143] copyHostCerts
	I0729 19:12:10.820703  156414 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem, removing ...
	I0729 19:12:10.820713  156414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem
	I0729 19:12:10.820781  156414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem (1078 bytes)
	I0729 19:12:10.820905  156414 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem, removing ...
	I0729 19:12:10.820914  156414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem
	I0729 19:12:10.820943  156414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem (1123 bytes)
	I0729 19:12:10.821005  156414 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem, removing ...
	I0729 19:12:10.821013  156414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem
	I0729 19:12:10.821041  156414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem (1679 bytes)
	I0729 19:12:10.821115  156414 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem org=jenkins.embed-certs-368536 san=[127.0.0.1 192.168.50.95 embed-certs-368536 localhost minikube]
	I0729 19:12:10.867595  156414 provision.go:177] copyRemoteCerts
	I0729 19:12:10.867661  156414 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 19:12:10.867685  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:10.870501  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.870857  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.870881  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.871063  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:10.871267  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:10.871428  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:10.871595  156414 sshutil.go:53] new ssh client: &{IP:192.168.50.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/embed-certs-368536/id_rsa Username:docker}
	I0729 19:12:10.956269  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 19:12:10.981554  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 19:12:11.006055  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 19:12:11.031709  156414 provision.go:87] duration metric: took 217.286178ms to configureAuth
	I0729 19:12:11.031746  156414 buildroot.go:189] setting minikube options for container-runtime
	I0729 19:12:11.031924  156414 config.go:182] Loaded profile config "embed-certs-368536": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:12:11.032088  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:11.034772  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.035129  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:11.035159  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.035405  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:11.035583  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:11.035737  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:11.035863  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:11.036016  156414 main.go:141] libmachine: Using SSH client type: native
	I0729 19:12:11.036203  156414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.95 22 <nil> <nil>}
	I0729 19:12:11.036218  156414 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 19:12:11.321782  156414 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 19:12:11.321810  156414 machine.go:97] duration metric: took 880.608535ms to provisionDockerMachine
	I0729 19:12:11.321823  156414 start.go:293] postStartSetup for "embed-certs-368536" (driver="kvm2")
	I0729 19:12:11.321834  156414 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 19:12:11.321850  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:12:11.322243  156414 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 19:12:11.322281  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:11.325081  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.325437  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:11.325459  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.325612  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:11.325841  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:11.326025  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:11.326177  156414 sshutil.go:53] new ssh client: &{IP:192.168.50.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/embed-certs-368536/id_rsa Username:docker}
	I0729 19:12:11.412548  156414 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 19:12:11.417360  156414 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 19:12:11.417386  156414 filesync.go:126] Scanning /home/jenkins/minikube-integration/19339-88081/.minikube/addons for local assets ...
	I0729 19:12:11.417466  156414 filesync.go:126] Scanning /home/jenkins/minikube-integration/19339-88081/.minikube/files for local assets ...
	I0729 19:12:11.417538  156414 filesync.go:149] local asset: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem -> 952822.pem in /etc/ssl/certs
	I0729 19:12:11.417632  156414 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 19:12:11.429176  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem --> /etc/ssl/certs/952822.pem (1708 bytes)
	I0729 19:12:11.457333  156414 start.go:296] duration metric: took 135.490005ms for postStartSetup
	I0729 19:12:11.457401  156414 fix.go:56] duration metric: took 20.017075153s for fixHost
	I0729 19:12:11.457428  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:11.460243  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.460653  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:11.460702  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.460873  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:11.461060  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:11.461233  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:11.461408  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:11.461586  156414 main.go:141] libmachine: Using SSH client type: native
	I0729 19:12:11.461763  156414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.95 22 <nil> <nil>}
	I0729 19:12:11.461779  156414 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 19:12:11.573697  156414 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722280331.531987438
	
	I0729 19:12:11.573724  156414 fix.go:216] guest clock: 1722280331.531987438
	I0729 19:12:11.573735  156414 fix.go:229] Guest: 2024-07-29 19:12:11.531987438 +0000 UTC Remote: 2024-07-29 19:12:11.457406225 +0000 UTC m=+302.608153452 (delta=74.581213ms)
	I0729 19:12:11.573758  156414 fix.go:200] guest clock delta is within tolerance: 74.581213ms
	I0729 19:12:11.573763  156414 start.go:83] releasing machines lock for "embed-certs-368536", held for 20.133485011s
	I0729 19:12:11.573782  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:12:11.574056  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetIP
	I0729 19:12:11.576405  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.576760  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:11.576798  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.576988  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:12:11.577479  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:12:11.577644  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:12:11.577737  156414 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 19:12:11.577799  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:11.577840  156414 ssh_runner.go:195] Run: cat /version.json
	I0729 19:12:11.577869  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:11.580473  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.580564  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.580840  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:11.580890  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.580921  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:11.580936  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.581056  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:11.581203  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:11.581358  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:11.581369  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:11.581554  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:11.581564  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:11.581669  156414 sshutil.go:53] new ssh client: &{IP:192.168.50.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/embed-certs-368536/id_rsa Username:docker}
	I0729 19:12:11.581732  156414 sshutil.go:53] new ssh client: &{IP:192.168.50.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/embed-certs-368536/id_rsa Username:docker}
	I0729 19:12:11.662254  156414 ssh_runner.go:195] Run: systemctl --version
	I0729 19:12:11.688214  156414 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 19:12:11.838322  156414 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 19:12:11.844435  156414 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 19:12:11.844521  156414 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 19:12:11.859899  156414 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 19:12:11.859923  156414 start.go:495] detecting cgroup driver to use...
	I0729 19:12:11.859990  156414 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 19:12:11.876768  156414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 19:12:11.890508  156414 docker.go:217] disabling cri-docker service (if available) ...
	I0729 19:12:11.890584  156414 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 19:12:11.904817  156414 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 19:12:11.919251  156414 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 19:12:12.053205  156414 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 19:12:12.205975  156414 docker.go:233] disabling docker service ...
	I0729 19:12:12.206041  156414 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 19:12:12.222129  156414 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 19:12:12.235288  156414 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 19:12:12.386940  156414 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 19:12:12.503688  156414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 19:12:12.518539  156414 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 19:12:12.538744  156414 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 19:12:12.538805  156414 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:12:12.549615  156414 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 19:12:12.549683  156414 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:12:12.560565  156414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:12:12.571814  156414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:12:12.582852  156414 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 19:12:12.595237  156414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:12:12.607232  156414 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:12:12.626183  156414 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:12:12.637202  156414 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 19:12:12.647141  156414 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 19:12:12.647204  156414 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 19:12:12.661099  156414 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 19:12:12.671936  156414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:12:12.795418  156414 ssh_runner.go:195] Run: sudo systemctl restart crio
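The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf before cri-o is restarted. Assuming the stock drop-in on the Buildroot guest, the relevant keys should end up approximately as below; this is a sketch derived only from the sed commands shown, and can be checked over SSH with grep:

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# expected (approximately):
	#   pause_image = "registry.k8s.io/pause:3.9"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   default_sysctls = [
	#     "net.ipv4.ip_unprivileged_port_start=0",
	#   ]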
	I0729 19:12:12.934125  156414 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 19:12:12.934220  156414 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 19:12:12.939058  156414 start.go:563] Will wait 60s for crictl version
	I0729 19:12:12.939123  156414 ssh_runner.go:195] Run: which crictl
	I0729 19:12:12.943972  156414 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 19:12:12.990564  156414 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 19:12:12.990672  156414 ssh_runner.go:195] Run: crio --version
	I0729 19:12:13.018852  156414 ssh_runner.go:195] Run: crio --version
	I0729 19:12:13.053593  156414 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 19:12:13.054890  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetIP
	I0729 19:12:13.057601  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:13.057994  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:13.058025  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:13.058229  156414 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0729 19:12:13.062303  156414 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 19:12:13.075815  156414 kubeadm.go:883] updating cluster {Name:embed-certs-368536 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-368536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.95 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 19:12:13.075989  156414 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 19:12:13.076042  156414 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:12:13.112073  156414 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 19:12:13.112136  156414 ssh_runner.go:195] Run: which lz4
	I0729 19:12:13.116209  156414 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 19:12:13.120509  156414 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 19:12:13.120545  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 19:12:14.521429  156414 crio.go:462] duration metric: took 1.405252765s to copy over tarball
	I0729 19:12:14.521516  156414 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 19:12:16.740086  156414 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.218529852s)
	I0729 19:12:16.740129  156414 crio.go:469] duration metric: took 2.218665992s to extract the tarball
	I0729 19:12:16.740140  156414 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 19:12:16.779302  156414 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:12:16.825787  156414 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 19:12:16.825823  156414 cache_images.go:84] Images are preloaded, skipping loading
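Editor's note: the lines above check for a preloaded image tarball on the guest, scp it over when missing, and extract it with tar's lz4 filter before re-listing crictl images. A minimal Go sketch of that copy-then-extract step follows; it is not the minikube implementation, and the extractPreload helper name is an assumption.

// Hypothetical sketch (not minikube source): extract an lz4-compressed
// image tarball into /var the way the tar invocation in the log does,
// bailing out when the archive is absent.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func extractPreload(tarball, dest string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("preload archive not found: %w", err)
	}
	// Mirrors: tar --xattrs --xattrs-include security.capability -I lz4 -C <dest> -xf <tarball>
	cmd := exec.Command("tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", dest, "-xf", tarball)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}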
	I0729 19:12:16.825832  156414 kubeadm.go:934] updating node { 192.168.50.95 8443 v1.30.3 crio true true} ...
	I0729 19:12:16.825972  156414 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-368536 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.95
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-368536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 19:12:16.826034  156414 ssh_runner.go:195] Run: crio config
	I0729 19:12:16.873701  156414 cni.go:84] Creating CNI manager for ""
	I0729 19:12:16.873734  156414 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:12:16.873752  156414 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 19:12:16.873776  156414 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.95 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-368536 NodeName:embed-certs-368536 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.95"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.95 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 19:12:16.873923  156414 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.95
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-368536"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.95
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.95"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 19:12:16.873987  156414 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 19:12:16.884427  156414 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 19:12:16.884530  156414 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 19:12:16.895097  156414 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0729 19:12:16.914112  156414 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 19:12:16.931797  156414 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0729 19:12:16.949824  156414 ssh_runner.go:195] Run: grep 192.168.50.95	control-plane.minikube.internal$ /etc/hosts
	I0729 19:12:16.953765  156414 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.95	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 19:12:16.967138  156414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:12:17.088789  156414 ssh_runner.go:195] Run: sudo systemctl start kubelet
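Editor's note: before starting the kubelet, the log pins control-plane.minikube.internal to the node IP in /etc/hosts with a grep/echo one-liner that removes any stale entry and appends the current mapping. A minimal Go sketch of that idempotent update is below; it is not the minikube source, and the pinHost helper name and 0644 file mode are assumptions.

// Hypothetical sketch (not minikube source): idempotently map a hostname
// to an IP in /etc/hosts, the same effect as the grep/echo one-liner above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func pinHost(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	var kept []string
	for _, line := range lines {
		fields := strings.Fields(line)
		// Drop any existing line whose last name is exactly this hostname.
		if len(fields) >= 2 && fields[len(fields)-1] == host {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := pinHost("/etc/hosts", "192.168.50.95", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}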
	I0729 19:12:17.108577  156414 certs.go:68] Setting up /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536 for IP: 192.168.50.95
	I0729 19:12:17.108601  156414 certs.go:194] generating shared ca certs ...
	I0729 19:12:17.108623  156414 certs.go:226] acquiring lock for ca certs: {Name:mk20106d58b3f22ea0fc0f4b499fa3fa572a2690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:12:17.108831  156414 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key
	I0729 19:12:17.109076  156414 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key
	I0729 19:12:17.109118  156414 certs.go:256] generating profile certs ...
	I0729 19:12:17.109296  156414 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/client.key
	I0729 19:12:17.109394  156414 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/apiserver.key.77ca8755
	I0729 19:12:17.109448  156414 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/proxy-client.key
	I0729 19:12:17.109619  156414 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282.pem (1338 bytes)
	W0729 19:12:17.109651  156414 certs.go:480] ignoring /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282_empty.pem, impossibly tiny 0 bytes
	I0729 19:12:17.109661  156414 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 19:12:17.109688  156414 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem (1078 bytes)
	I0729 19:12:17.109722  156414 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem (1123 bytes)
	I0729 19:12:17.109756  156414 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem (1679 bytes)
	I0729 19:12:17.109819  156414 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem (1708 bytes)
	I0729 19:12:17.110926  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 19:12:17.142003  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 19:12:17.176798  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 19:12:17.204272  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 19:12:17.244558  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0729 19:12:17.281935  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 19:12:17.306456  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 19:12:17.332576  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 19:12:17.357372  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282.pem --> /usr/share/ca-certificates/95282.pem (1338 bytes)
	I0729 19:12:17.380363  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem --> /usr/share/ca-certificates/952822.pem (1708 bytes)
	I0729 19:12:17.403737  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 19:12:17.428493  156414 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 19:12:17.445767  156414 ssh_runner.go:195] Run: openssl version
	I0729 19:12:17.451514  156414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/952822.pem && ln -fs /usr/share/ca-certificates/952822.pem /etc/ssl/certs/952822.pem"
	I0729 19:12:17.463663  156414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/952822.pem
	I0729 19:12:17.469467  156414 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:45 /usr/share/ca-certificates/952822.pem
	I0729 19:12:17.469523  156414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/952822.pem
	I0729 19:12:17.475754  156414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/952822.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 19:12:17.487617  156414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 19:12:17.498699  156414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:12:17.503205  156414 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:12:17.503257  156414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:12:17.508880  156414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 19:12:17.519570  156414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/95282.pem && ln -fs /usr/share/ca-certificates/95282.pem /etc/ssl/certs/95282.pem"
	I0729 19:12:17.530630  156414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/95282.pem
	I0729 19:12:17.535004  156414 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:45 /usr/share/ca-certificates/95282.pem
	I0729 19:12:17.535046  156414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/95282.pem
	I0729 19:12:17.540647  156414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/95282.pem /etc/ssl/certs/51391683.0"
	I0729 19:12:17.551719  156414 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 19:12:17.556199  156414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 19:12:17.562469  156414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 19:12:17.568561  156414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 19:12:17.574653  156414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 19:12:17.580458  156414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 19:12:17.586392  156414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
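Editor's note: the block above runs `openssl x509 -checkend 86400` against each control-plane certificate, i.e. it fails if a certificate expires within the next 24 hours. A minimal Go sketch of the same check using crypto/x509 is below; it is not the minikube source, and the expiresWithin helper name is an assumption.

// Hypothetical sketch (not minikube source): the Go equivalent of
// `openssl x509 -noout -in <cert> -checkend 86400` for a PEM certificate.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(certPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(certPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", certPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True when NotAfter falls inside the window (or has already passed).
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("certificate expires within 24h; regenerate it")
	} else {
		fmt.Println("certificate is still valid for at least 24h")
	}
}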
	I0729 19:12:17.592304  156414 kubeadm.go:392] StartCluster: {Name:embed-certs-368536 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-368536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.95 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:12:17.592422  156414 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 19:12:17.592467  156414 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:12:17.631628  156414 cri.go:89] found id: ""
	I0729 19:12:17.631701  156414 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 19:12:17.642636  156414 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 19:12:17.642665  156414 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 19:12:17.642731  156414 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 19:12:17.652551  156414 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 19:12:17.653603  156414 kubeconfig.go:125] found "embed-certs-368536" server: "https://192.168.50.95:8443"
	I0729 19:12:17.656022  156414 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 19:12:17.666987  156414 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.95
	I0729 19:12:17.667014  156414 kubeadm.go:1160] stopping kube-system containers ...
	I0729 19:12:17.667026  156414 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 19:12:17.667065  156414 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:12:17.709534  156414 cri.go:89] found id: ""
	I0729 19:12:17.709598  156414 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 19:12:17.729709  156414 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:12:17.739968  156414 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:12:17.739990  156414 kubeadm.go:157] found existing configuration files:
	
	I0729 19:12:17.740051  156414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 19:12:17.749727  156414 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:12:17.749794  156414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:12:17.760013  156414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 19:12:17.781070  156414 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:12:17.781135  156414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:12:17.794747  156414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 19:12:17.805862  156414 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:12:17.805938  156414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:12:17.815892  156414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 19:12:17.825005  156414 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:12:17.825072  156414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
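Editor's note: the grep/rm pairs above treat each kubeconfig under /etc/kubernetes as stale unless it already references https://control-plane.minikube.internal:8443, deleting it so the subsequent `kubeadm init phase kubeconfig` can regenerate it. A minimal Go sketch of that cleanup loop follows; it is not the minikube source.

// Hypothetical sketch (not minikube source): remove kubeconfig files that
// do not reference the expected control-plane endpoint, as the grep/rm
// pairs in the log above do.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err == nil && strings.Contains(string(data), endpoint) {
			continue // already points at the right endpoint; keep it
		}
		// Missing or stale: delete so `kubeadm init phase kubeconfig` recreates it.
		if err := os.Remove(f); err != nil && !os.IsNotExist(err) {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}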
	I0729 19:12:17.834745  156414 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:12:17.844191  156414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:12:17.963586  156414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:12:19.325254  156414 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.361626819s)
	I0729 19:12:19.325289  156414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:12:19.537565  156414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:12:19.611697  156414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:12:19.710291  156414 api_server.go:52] waiting for apiserver process to appear ...
	I0729 19:12:19.710408  156414 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:12:20.210809  156414 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:12:20.711234  156414 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:12:20.727361  156414 api_server.go:72] duration metric: took 1.017067714s to wait for apiserver process to appear ...
	I0729 19:12:20.727396  156414 api_server.go:88] waiting for apiserver healthz status ...
	I0729 19:12:20.727432  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:20.727961  156414 api_server.go:269] stopped: https://192.168.50.95:8443/healthz: Get "https://192.168.50.95:8443/healthz": dial tcp 192.168.50.95:8443: connect: connection refused
	I0729 19:12:21.228408  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:23.622048  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 19:12:23.622093  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 19:12:23.622106  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:23.628174  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 19:12:23.628195  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 19:12:23.728403  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:23.732920  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:12:23.732969  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:12:24.227954  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:24.233109  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:12:24.233143  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:12:24.727641  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:24.735127  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:12:24.735156  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:12:25.227686  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:25.236914  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:12:25.236947  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:12:25.728500  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:25.735276  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:12:25.735307  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:12:26.227824  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:26.232439  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:12:26.232471  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:12:26.728521  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:26.732952  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:12:26.732989  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:12:27.227504  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:27.232166  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 200:
	ok
	I0729 19:12:27.238505  156414 api_server.go:141] control plane version: v1.30.3
	I0729 19:12:27.238531  156414 api_server.go:131] duration metric: took 6.511128129s to wait for apiserver health ...
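Editor's note: the healthz probes above poll https://192.168.50.95:8443/healthz roughly every 500 ms, treating connection refused, 403 (RBAC bootstrap not yet done), and 500 (post-start hooks still pending) as "not ready", until the endpoint returns 200. A minimal Go sketch of that wait loop is below; it is not the minikube source, and the timeout, poll interval, and skipped TLS verification are assumptions made so the probe works before client certificates are wired up.

// Hypothetical sketch (not minikube source): poll the apiserver /healthz
// endpoint until it returns 200, tolerating the transient refused/403/500
// responses seen in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			// 403/500 mean the apiserver is up but still bootstrapping; retry.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.95:8443/healthz", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver healthy")
}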
	I0729 19:12:27.238541  156414 cni.go:84] Creating CNI manager for ""
	I0729 19:12:27.238560  156414 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:12:27.240943  156414 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 19:12:27.242526  156414 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 19:12:27.254578  156414 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 19:12:27.274700  156414 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 19:12:27.289359  156414 system_pods.go:59] 8 kube-system pods found
	I0729 19:12:27.289412  156414 system_pods.go:61] "coredns-7db6d8ff4d-dww2j" [31418aeb-98be-4b43-a687-5c8f64ec5e9d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:12:27.289424  156414 system_pods.go:61] "etcd-embed-certs-368536" [0a7aac00-9161-473a-87f1-2702211beaac] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 19:12:27.289443  156414 system_pods.go:61] "kube-apiserver-embed-certs-368536" [d6638a87-43df-457e-b335-1ab2aa03f421] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 19:12:27.289469  156414 system_pods.go:61] "kube-controller-manager-embed-certs-368536" [76fefc23-7794-40fc-845a-73d83eb55450] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 19:12:27.289478  156414 system_pods.go:61] "kube-proxy-9xwt2" [f637605b-9bc2-4922-b801-04f681a81e7c] Running
	I0729 19:12:27.289485  156414 system_pods.go:61] "kube-scheduler-embed-certs-368536" [476e1d42-6610-44b5-b77b-b25537f044eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 19:12:27.289495  156414 system_pods.go:61] "metrics-server-569cc877fc-xnkwq" [19328b03-8c7d-499f-9a30-31f023605e49] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:12:27.289500  156414 system_pods.go:61] "storage-provisioner" [b049d065-fbb1-48b6-a377-2938a0519c78] Running
	I0729 19:12:27.289513  156414 system_pods.go:74] duration metric: took 14.789212ms to wait for pod list to return data ...
	I0729 19:12:27.289525  156414 node_conditions.go:102] verifying NodePressure condition ...
	I0729 19:12:27.292692  156414 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 19:12:27.292716  156414 node_conditions.go:123] node cpu capacity is 2
	I0729 19:12:27.292809  156414 node_conditions.go:105] duration metric: took 3.278515ms to run NodePressure ...
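	(Editor's note: the system_pods.go and node_conditions.go steps above simply read kube-system pods and node capacity through the Kubernetes API. A rough client-go equivalent, assuming a kubeconfig at the default path; the field names are from the core/v1 API, everything else is illustrative.)

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a client from the local kubeconfig (path is an assumption).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		// "waiting for kube-system pods to appear"
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("%d kube-system pods found\n", len(pods.Items))

		// "verifying NodePressure condition": report per-node CPU and ephemeral-storage capacity.
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
		}
	}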
	I0729 19:12:27.292825  156414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:12:27.563167  156414 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 19:12:27.567532  156414 kubeadm.go:739] kubelet initialised
	I0729 19:12:27.567552  156414 kubeadm.go:740] duration metric: took 4.363687ms waiting for restarted kubelet to initialise ...
	I0729 19:12:27.567566  156414 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:12:27.573549  156414 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-dww2j" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:29.580511  156414 pod_ready.go:102] pod "coredns-7db6d8ff4d-dww2j" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:32.080352  156414 pod_ready.go:102] pod "coredns-7db6d8ff4d-dww2j" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:34.079542  156414 pod_ready.go:92] pod "coredns-7db6d8ff4d-dww2j" in "kube-system" namespace has status "Ready":"True"
	I0729 19:12:34.079564  156414 pod_ready.go:81] duration metric: took 6.505994438s for pod "coredns-7db6d8ff4d-dww2j" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:34.079574  156414 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:36.086785  156414 pod_ready.go:102] pod "etcd-embed-certs-368536" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:37.089587  156414 pod_ready.go:92] pod "etcd-embed-certs-368536" in "kube-system" namespace has status "Ready":"True"
	I0729 19:12:37.089611  156414 pod_ready.go:81] duration metric: took 3.010030448s for pod "etcd-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.089621  156414 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.093814  156414 pod_ready.go:92] pod "kube-apiserver-embed-certs-368536" in "kube-system" namespace has status "Ready":"True"
	I0729 19:12:37.093833  156414 pod_ready.go:81] duration metric: took 4.206508ms for pod "kube-apiserver-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.093842  156414 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.097603  156414 pod_ready.go:92] pod "kube-controller-manager-embed-certs-368536" in "kube-system" namespace has status "Ready":"True"
	I0729 19:12:37.097621  156414 pod_ready.go:81] duration metric: took 3.772149ms for pod "kube-controller-manager-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.097633  156414 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9xwt2" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.101696  156414 pod_ready.go:92] pod "kube-proxy-9xwt2" in "kube-system" namespace has status "Ready":"True"
	I0729 19:12:37.101720  156414 pod_ready.go:81] duration metric: took 4.078653ms for pod "kube-proxy-9xwt2" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.101732  156414 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.107711  156414 pod_ready.go:92] pod "kube-scheduler-embed-certs-368536" in "kube-system" namespace has status "Ready":"True"
	I0729 19:12:37.107728  156414 pod_ready.go:81] duration metric: took 5.989066ms for pod "kube-scheduler-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.107738  156414 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:39.113796  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:41.115401  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:43.614461  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:45.614900  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:48.113738  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:50.114364  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:52.114790  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:54.613472  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:56.613889  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:59.114206  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:01.614516  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:04.114859  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:06.114993  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:08.615213  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:11.114015  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:13.114342  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:15.614423  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:18.114442  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:20.614465  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:23.117909  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:25.614155  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:28.114746  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:30.613689  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:32.614790  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:35.113716  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:37.116362  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:39.614545  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:42.114414  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:44.114654  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:46.114765  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:48.615523  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:51.114096  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:53.114180  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:55.614278  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:58.114138  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:00.613679  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:02.614878  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:05.117278  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:07.614025  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
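	(Editor's note: the pod_ready.go lines above poll each system-critical pod until its Ready condition reports True; in this run every control-plane pod converges quickly while metrics-server-569cc877fc-xnkwq stays not-Ready for the whole excerpt. A minimal sketch of such a readiness wait, assuming an existing *kubernetes.Clientset; this is not the test harness's own code.)

	package main

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitPodReady polls until the named pod reports Ready=True, mirroring the
	// pod_ready.go waits in the log above (illustrative only).
	func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // keep polling through transient errors
				}
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}

	(With a 4m0s timeout, a wait on the metrics-server pod would time out here, matching the repeated "Ready":"False" entries above.)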
	
	
	==> CRI-O <==
	Jul 29 19:14:09 no-preload-524369 crio[730]: time="2024-07-29 19:14:09.328577051Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:401d89d64492aacbf7f3a9be7bb29b22e09fe57410a3a90c0d30ea6acb1c67d4,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:4c9ecca1-222d-423a-a4a8-617b3b5dceaf,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722279900199378108,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c9ecca1-222d-423a-a4a8-617b3b5dceaf,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-
system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-29T19:04:59.890894401Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2b1cb0c7b6acaca28e3db58c6032e4e811778b4ed92cdf2d3fa0da933236db10,Metadata:&PodSandboxMetadata{Name:metrics-server-78fcd8795b-l6hjr,Uid:285c1ec8-47a6-4fcd-bcd5-6a5d7d53a06b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722279900004182388,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-78fcd8795b-l6hjr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 285c1ec8-47a6-4fcd-bcd5-6a5d7d53a06b
,k8s-app: metrics-server,pod-template-hash: 78fcd8795b,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T19:04:59.694192541Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:40866c15eaf8a6a5e4f9936d43e712088192d4dfd4ed7502d7614e27d52dc512,Metadata:&PodSandboxMetadata{Name:coredns-5cfdc65f69-w7ptq,Uid:5d9df116-aead-4d87-ade9-397d402c6a9b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722279898978281755,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5cfdc65f69-w7ptq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d9df116-aead-4d87-ade9-397d402c6a9b,k8s-app: kube-dns,pod-template-hash: 5cfdc65f69,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T19:04:58.649013228Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1fab013325b34608f64e4ea772c3b59e00e46345163f35e2d1517f5cba538bf5,Metadata:&PodSandboxMetadata{Name:coredns-5cfdc65f69-sqjsh,Uid:5afcfe5e-4f63-47fc-
a382-d2485c80fd87,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722279898969265257,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5cfdc65f69-sqjsh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5afcfe5e-4f63-47fc-a382-d2485c80fd87,k8s-app: kube-dns,pod-template-hash: 5cfdc65f69,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T19:04:58.655893793Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2078f2612db573faa57e329b4061a9f9c54a65348237c13eb76ef7585cdb5886,Metadata:&PodSandboxMetadata{Name:kube-proxy-fzrdv,Uid:047bc0eb-0615-4a77-a835-99a264b0b5cf,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722279898472288121,Labels:map[string]string{controller-revision-hash: 6558c48888,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-fzrdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 047bc0eb-0615-4a77-a835-99a264b0b5cf,k8s-app: kube-proxy,pod-temp
late-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T19:04:58.157668635Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:01fb15d87a6fd9f15dadac3916ec96d1101ffd640b5d737afa885cd788903915,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-524369,Uid:c8f315ab615ee060c6dba20ec59c10b9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722279887382466245,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-524369,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8f315ab615ee060c6dba20ec59c10b9,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c8f315ab615ee060c6dba20ec59c10b9,kubernetes.io/config.seen: 2024-07-29T19:04:46.933355365Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2c2158c077b71d9cb1ecf5b2a581ed85e76fc457a84c3d5c0c331fef9f20f319,Metadata:&PodSandboxMetadata{Name:kube-controller-m
anager-no-preload-524369,Uid:2fd43d18d99a26b96f54dd633497cef2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722279887374940427,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-524369,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fd43d18d99a26b96f54dd633497cef2,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2fd43d18d99a26b96f54dd633497cef2,kubernetes.io/config.seen: 2024-07-29T19:04:46.933354086Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c1b2379aa4108f6d02bf5d9bb8dfe8f16a7005172004cdb6a2fc4f529ce9ca1a,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-524369,Uid:397c92beaad6c72a3c97d4dc7f6d1bd4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722279887369062000,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-524369,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 397c92beaad6c72a3c97d4dc7f6d1bd4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.72.7:2379,kubernetes.io/config.hash: 397c92beaad6c72a3c97d4dc7f6d1bd4,kubernetes.io/config.seen: 2024-07-29T19:04:46.933348871Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:eb424fbb490db28bef3d8dbf82d410d397cfd402eb3faac07ba3a174221a620e,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-524369,Uid:55ad145f5a950f7c5aa599aef2bca250,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722279887368720429,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-524369,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55ad145f5a950f7c5aa599aef2bca250,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.7:8443,ku
bernetes.io/config.hash: 55ad145f5a950f7c5aa599aef2bca250,kubernetes.io/config.seen: 2024-07-29T19:04:46.933352615Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:08b1ebb8c2b21bd270b8d1b41c670ee096631918c3e48940cf706bc719221c11,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-524369,Uid:55ad145f5a950f7c5aa599aef2bca250,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722279607935288000,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-524369,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55ad145f5a950f7c5aa599aef2bca250,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.7:8443,kubernetes.io/config.hash: 55ad145f5a950f7c5aa599aef2bca250,kubernetes.io/config.seen: 2024-07-29T19:00:07.390053463Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptor
s.go:74" id=2a52a3de-eb71-4873-8f5f-8eb63c75acfe name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 29 19:14:09 no-preload-524369 crio[730]: time="2024-07-29 19:14:09.329252784Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a75a9768-8e0c-438b-b1cd-c5410e6be314 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:14:09 no-preload-524369 crio[730]: time="2024-07-29 19:14:09.329334729Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a75a9768-8e0c-438b-b1cd-c5410e6be314 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:14:09 no-preload-524369 crio[730]: time="2024-07-29 19:14:09.329515850Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:39ca77d323edde3a471cc09829107e4b60723540d97038b6563f3515f5632480,PodSandboxId:401d89d64492aacbf7f3a9be7bb29b22e09fe57410a3a90c0d30ea6acb1c67d4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722279900306681926,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c9ecca1-222d-423a-a4a8-617b3b5dceaf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05dd5140f888cf33e8da2f08c25b80769c5b124f7e0d009eda79f69f0198a964,PodSandboxId:1fab013325b34608f64e4ea772c3b59e00e46345163f35e2d1517f5cba538bf5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722279899738581731,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-sqjsh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5afcfe5e-4f63-47fc-a382-d2485c80fd87,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab2c6bd4e858e35e487a738492bd7390c9318baf1dbe8f3e7fb1be5aba587f35,PodSandboxId:40866c15eaf8a6a5e4f9936d43e712088192d4dfd4ed7502d7614e27d52dc512,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722279899807179379,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-w7ptq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d
9df116-aead-4d87-ade9-397d402c6a9b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecf1d196ad19dcb91c3e74f025026a283ee2d7119044448e7baeffdf7d3e36bd,PodSandboxId:2078f2612db573faa57e329b4061a9f9c54a65348237c13eb76ef7585cdb5886,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1722279898613706363,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fzrdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 047bc0eb-0615-4a77-a835-99a264b0b5cf,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:774b6f05ee3602d6336ae397ef0e99a0c2efdb0e8e875055fb477f14ad7c18d9,PodSandboxId:01fb15d87a6fd9f15dadac3916ec96d1101ffd640b5d737afa885cd788903915,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722279887600009834,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-524369,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8f315ab615ee060c6dba20ec59c10b9,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:565b7d4870cc8f097db55c6187e1917737e3e89601c40ad729ce4bcc12cef063,PodSandboxId:c1b2379aa4108f6d02bf5d9bb8dfe8f16a7005172004cdb6a2fc4f529ce9ca1a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722279887650207443,Labels:map[string]string{io.kube
rnetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-524369,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 397c92beaad6c72a3c97d4dc7f6d1bd4,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db5c6899215ea5671d54ffdbef718a3ed7472a7e909b713e0cd3e3924b9aff5a,PodSandboxId:2c2158c077b71d9cb1ecf5b2a581ed85e76fc457a84c3d5c0c331fef9f20f319,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722279887611204045,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-524369,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fd43d18d99a26b96f54dd633497cef2,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0123ed63d3bb67b7ed32c2266ea6014d1f31cd96ef54a155fa5f07642548a9f,PodSandboxId:eb424fbb490db28bef3d8dbf82d410d397cfd402eb3faac07ba3a174221a620e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722279887524150874,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-524369,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55ad145f5a950f7c5aa599aef2bca250,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93d7fa5f82e2cfb7f4316f748e51228a62bcf15c90cc7a6f2f50c04e84fc6cfb,PodSandboxId:08b1ebb8c2b21bd270b8d1b41c670ee096631918c3e48940cf706bc719221c11,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722279608178041451,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-524369,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55ad145f5a950f7c5aa599aef2bca250,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a75a9768-8e0c-438b-b1cd-c5410e6be314 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:14:09 no-preload-524369 crio[730]: time="2024-07-29 19:14:09.331811765Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=819cd43c-bb84-4ed9-8cef-e48fe7fcc651 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:14:09 no-preload-524369 crio[730]: time="2024-07-29 19:14:09.331881461Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=819cd43c-bb84-4ed9-8cef-e48fe7fcc651 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:14:09 no-preload-524369 crio[730]: time="2024-07-29 19:14:09.333152746Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8ecdb2b5-c6ee-4894-a32c-21d9887fe441 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:14:09 no-preload-524369 crio[730]: time="2024-07-29 19:14:09.333518929Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722280449333497103,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8ecdb2b5-c6ee-4894-a32c-21d9887fe441 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:14:09 no-preload-524369 crio[730]: time="2024-07-29 19:14:09.334164883Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1e4fe0b2-72de-4656-884e-6a71bcffc94c name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:14:09 no-preload-524369 crio[730]: time="2024-07-29 19:14:09.334440091Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1e4fe0b2-72de-4656-884e-6a71bcffc94c name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:14:09 no-preload-524369 crio[730]: time="2024-07-29 19:14:09.334902398Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:39ca77d323edde3a471cc09829107e4b60723540d97038b6563f3515f5632480,PodSandboxId:401d89d64492aacbf7f3a9be7bb29b22e09fe57410a3a90c0d30ea6acb1c67d4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722279900306681926,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c9ecca1-222d-423a-a4a8-617b3b5dceaf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05dd5140f888cf33e8da2f08c25b80769c5b124f7e0d009eda79f69f0198a964,PodSandboxId:1fab013325b34608f64e4ea772c3b59e00e46345163f35e2d1517f5cba538bf5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722279899738581731,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-sqjsh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5afcfe5e-4f63-47fc-a382-d2485c80fd87,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab2c6bd4e858e35e487a738492bd7390c9318baf1dbe8f3e7fb1be5aba587f35,PodSandboxId:40866c15eaf8a6a5e4f9936d43e712088192d4dfd4ed7502d7614e27d52dc512,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722279899807179379,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-w7ptq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d
9df116-aead-4d87-ade9-397d402c6a9b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecf1d196ad19dcb91c3e74f025026a283ee2d7119044448e7baeffdf7d3e36bd,PodSandboxId:2078f2612db573faa57e329b4061a9f9c54a65348237c13eb76ef7585cdb5886,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1722279898613706363,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fzrdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 047bc0eb-0615-4a77-a835-99a264b0b5cf,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:774b6f05ee3602d6336ae397ef0e99a0c2efdb0e8e875055fb477f14ad7c18d9,PodSandboxId:01fb15d87a6fd9f15dadac3916ec96d1101ffd640b5d737afa885cd788903915,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722279887600009834,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-524369,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8f315ab615ee060c6dba20ec59c10b9,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:565b7d4870cc8f097db55c6187e1917737e3e89601c40ad729ce4bcc12cef063,PodSandboxId:c1b2379aa4108f6d02bf5d9bb8dfe8f16a7005172004cdb6a2fc4f529ce9ca1a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722279887650207443,Labels:map[string]string{io.kube
rnetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-524369,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 397c92beaad6c72a3c97d4dc7f6d1bd4,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db5c6899215ea5671d54ffdbef718a3ed7472a7e909b713e0cd3e3924b9aff5a,PodSandboxId:2c2158c077b71d9cb1ecf5b2a581ed85e76fc457a84c3d5c0c331fef9f20f319,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722279887611204045,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-524369,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fd43d18d99a26b96f54dd633497cef2,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0123ed63d3bb67b7ed32c2266ea6014d1f31cd96ef54a155fa5f07642548a9f,PodSandboxId:eb424fbb490db28bef3d8dbf82d410d397cfd402eb3faac07ba3a174221a620e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722279887524150874,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-524369,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55ad145f5a950f7c5aa599aef2bca250,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93d7fa5f82e2cfb7f4316f748e51228a62bcf15c90cc7a6f2f50c04e84fc6cfb,PodSandboxId:08b1ebb8c2b21bd270b8d1b41c670ee096631918c3e48940cf706bc719221c11,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722279608178041451,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-524369,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55ad145f5a950f7c5aa599aef2bca250,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1e4fe0b2-72de-4656-884e-6a71bcffc94c name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:14:09 no-preload-524369 crio[730]: time="2024-07-29 19:14:09.369707956Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e63e8504-bb24-4025-97ab-720eafa007f0 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:14:09 no-preload-524369 crio[730]: time="2024-07-29 19:14:09.369782296Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e63e8504-bb24-4025-97ab-720eafa007f0 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:14:09 no-preload-524369 crio[730]: time="2024-07-29 19:14:09.371043313Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=91d1d83c-6a22-4453-aab9-e0e49b1c73b3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:14:09 no-preload-524369 crio[730]: time="2024-07-29 19:14:09.371403732Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722280449371380002,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=91d1d83c-6a22-4453-aab9-e0e49b1c73b3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:14:09 no-preload-524369 crio[730]: time="2024-07-29 19:14:09.371888177Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6c2057da-0dea-4d82-8261-28ef608998a9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:14:09 no-preload-524369 crio[730]: time="2024-07-29 19:14:09.371958954Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6c2057da-0dea-4d82-8261-28ef608998a9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:14:09 no-preload-524369 crio[730]: time="2024-07-29 19:14:09.372161218Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:39ca77d323edde3a471cc09829107e4b60723540d97038b6563f3515f5632480,PodSandboxId:401d89d64492aacbf7f3a9be7bb29b22e09fe57410a3a90c0d30ea6acb1c67d4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722279900306681926,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c9ecca1-222d-423a-a4a8-617b3b5dceaf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05dd5140f888cf33e8da2f08c25b80769c5b124f7e0d009eda79f69f0198a964,PodSandboxId:1fab013325b34608f64e4ea772c3b59e00e46345163f35e2d1517f5cba538bf5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722279899738581731,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-sqjsh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5afcfe5e-4f63-47fc-a382-d2485c80fd87,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab2c6bd4e858e35e487a738492bd7390c9318baf1dbe8f3e7fb1be5aba587f35,PodSandboxId:40866c15eaf8a6a5e4f9936d43e712088192d4dfd4ed7502d7614e27d52dc512,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722279899807179379,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-w7ptq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d
9df116-aead-4d87-ade9-397d402c6a9b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecf1d196ad19dcb91c3e74f025026a283ee2d7119044448e7baeffdf7d3e36bd,PodSandboxId:2078f2612db573faa57e329b4061a9f9c54a65348237c13eb76ef7585cdb5886,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1722279898613706363,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fzrdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 047bc0eb-0615-4a77-a835-99a264b0b5cf,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:774b6f05ee3602d6336ae397ef0e99a0c2efdb0e8e875055fb477f14ad7c18d9,PodSandboxId:01fb15d87a6fd9f15dadac3916ec96d1101ffd640b5d737afa885cd788903915,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722279887600009834,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-524369,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8f315ab615ee060c6dba20ec59c10b9,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:565b7d4870cc8f097db55c6187e1917737e3e89601c40ad729ce4bcc12cef063,PodSandboxId:c1b2379aa4108f6d02bf5d9bb8dfe8f16a7005172004cdb6a2fc4f529ce9ca1a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722279887650207443,Labels:map[string]string{io.kube
rnetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-524369,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 397c92beaad6c72a3c97d4dc7f6d1bd4,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db5c6899215ea5671d54ffdbef718a3ed7472a7e909b713e0cd3e3924b9aff5a,PodSandboxId:2c2158c077b71d9cb1ecf5b2a581ed85e76fc457a84c3d5c0c331fef9f20f319,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722279887611204045,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-524369,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fd43d18d99a26b96f54dd633497cef2,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0123ed63d3bb67b7ed32c2266ea6014d1f31cd96ef54a155fa5f07642548a9f,PodSandboxId:eb424fbb490db28bef3d8dbf82d410d397cfd402eb3faac07ba3a174221a620e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722279887524150874,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-524369,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55ad145f5a950f7c5aa599aef2bca250,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93d7fa5f82e2cfb7f4316f748e51228a62bcf15c90cc7a6f2f50c04e84fc6cfb,PodSandboxId:08b1ebb8c2b21bd270b8d1b41c670ee096631918c3e48940cf706bc719221c11,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722279608178041451,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-524369,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55ad145f5a950f7c5aa599aef2bca250,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6c2057da-0dea-4d82-8261-28ef608998a9 name=/runtime.v1.RuntimeService/ListContainers
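	(Editor's note: the CRI-O debug entries above are the server side of CRI gRPC calls such as /runtime.v1.RuntimeService/ListContainers; crictl and the kubelet issue the same RPCs. A bare-bones client sketch against the CRI-O socket, assuming the conventional /var/run/crio/crio.sock path; error handling is trimmed and the output format is arbitrary.)

	package main

	import (
		"context"
		"fmt"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Dial the CRI-O socket (default path; adjust for other runtimes).
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		rt := runtimeapi.NewRuntimeServiceClient(conn)

		// The same call that produces the ListContainersResponse dumps in the log.
		resp, err := rt.ListContainers(context.TODO(), &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s  %-28s %s\n", c.Id[:12], c.Metadata.Name, c.State)
		}
	}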
	Jul 29 19:14:09 no-preload-524369 crio[730]: time="2024-07-29 19:14:09.409946291Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=deed68f0-ec24-4567-823f-8cf51547efca name=/runtime.v1.RuntimeService/Version
	Jul 29 19:14:09 no-preload-524369 crio[730]: time="2024-07-29 19:14:09.410031864Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=deed68f0-ec24-4567-823f-8cf51547efca name=/runtime.v1.RuntimeService/Version
	Jul 29 19:14:09 no-preload-524369 crio[730]: time="2024-07-29 19:14:09.412313947Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5004b289-cf18-4262-8419-29d842c87ebc name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:14:09 no-preload-524369 crio[730]: time="2024-07-29 19:14:09.412847134Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722280449412824775,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5004b289-cf18-4262-8419-29d842c87ebc name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:14:09 no-preload-524369 crio[730]: time="2024-07-29 19:14:09.413421272Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5d7f091c-0fcc-48f9-8c38-61473c17aa7a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:14:09 no-preload-524369 crio[730]: time="2024-07-29 19:14:09.413490081Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5d7f091c-0fcc-48f9-8c38-61473c17aa7a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:14:09 no-preload-524369 crio[730]: time="2024-07-29 19:14:09.413746330Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:39ca77d323edde3a471cc09829107e4b60723540d97038b6563f3515f5632480,PodSandboxId:401d89d64492aacbf7f3a9be7bb29b22e09fe57410a3a90c0d30ea6acb1c67d4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722279900306681926,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c9ecca1-222d-423a-a4a8-617b3b5dceaf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05dd5140f888cf33e8da2f08c25b80769c5b124f7e0d009eda79f69f0198a964,PodSandboxId:1fab013325b34608f64e4ea772c3b59e00e46345163f35e2d1517f5cba538bf5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722279899738581731,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-sqjsh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5afcfe5e-4f63-47fc-a382-d2485c80fd87,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab2c6bd4e858e35e487a738492bd7390c9318baf1dbe8f3e7fb1be5aba587f35,PodSandboxId:40866c15eaf8a6a5e4f9936d43e712088192d4dfd4ed7502d7614e27d52dc512,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722279899807179379,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-w7ptq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d
9df116-aead-4d87-ade9-397d402c6a9b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecf1d196ad19dcb91c3e74f025026a283ee2d7119044448e7baeffdf7d3e36bd,PodSandboxId:2078f2612db573faa57e329b4061a9f9c54a65348237c13eb76ef7585cdb5886,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1722279898613706363,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fzrdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 047bc0eb-0615-4a77-a835-99a264b0b5cf,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:774b6f05ee3602d6336ae397ef0e99a0c2efdb0e8e875055fb477f14ad7c18d9,PodSandboxId:01fb15d87a6fd9f15dadac3916ec96d1101ffd640b5d737afa885cd788903915,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722279887600009834,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-524369,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8f315ab615ee060c6dba20ec59c10b9,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:565b7d4870cc8f097db55c6187e1917737e3e89601c40ad729ce4bcc12cef063,PodSandboxId:c1b2379aa4108f6d02bf5d9bb8dfe8f16a7005172004cdb6a2fc4f529ce9ca1a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722279887650207443,Labels:map[string]string{io.kube
rnetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-524369,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 397c92beaad6c72a3c97d4dc7f6d1bd4,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db5c6899215ea5671d54ffdbef718a3ed7472a7e909b713e0cd3e3924b9aff5a,PodSandboxId:2c2158c077b71d9cb1ecf5b2a581ed85e76fc457a84c3d5c0c331fef9f20f319,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722279887611204045,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-524369,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fd43d18d99a26b96f54dd633497cef2,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0123ed63d3bb67b7ed32c2266ea6014d1f31cd96ef54a155fa5f07642548a9f,PodSandboxId:eb424fbb490db28bef3d8dbf82d410d397cfd402eb3faac07ba3a174221a620e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722279887524150874,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-524369,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55ad145f5a950f7c5aa599aef2bca250,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93d7fa5f82e2cfb7f4316f748e51228a62bcf15c90cc7a6f2f50c04e84fc6cfb,PodSandboxId:08b1ebb8c2b21bd270b8d1b41c670ee096631918c3e48940cf706bc719221c11,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722279608178041451,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-524369,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55ad145f5a950f7c5aa599aef2bca250,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5d7f091c-0fcc-48f9-8c38-61473c17aa7a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	39ca77d323edd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   401d89d64492a       storage-provisioner
	ab2c6bd4e858e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   40866c15eaf8a       coredns-5cfdc65f69-w7ptq
	05dd5140f888c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   1fab013325b34       coredns-5cfdc65f69-sqjsh
	ecf1d196ad19d       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   9 minutes ago       Running             kube-proxy                0                   2078f2612db57       kube-proxy-fzrdv
	565b7d4870cc8       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   9 minutes ago       Running             etcd                      2                   c1b2379aa4108       etcd-no-preload-524369
	db5c6899215ea       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   9 minutes ago       Running             kube-controller-manager   2                   2c2158c077b71       kube-controller-manager-no-preload-524369
	774b6f05ee360       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   9 minutes ago       Running             kube-scheduler            2                   01fb15d87a6fd       kube-scheduler-no-preload-524369
	b0123ed63d3bb       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   9 minutes ago       Running             kube-apiserver            2                   eb424fbb490db       kube-apiserver-no-preload-524369
	93d7fa5f82e2c       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   14 minutes ago      Exited              kube-apiserver            1                   08b1ebb8c2b21       kube-apiserver-no-preload-524369
	
	
	==> coredns [05dd5140f888cf33e8da2f08c25b80769c5b124f7e0d009eda79f69f0198a964] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [ab2c6bd4e858e35e487a738492bd7390c9318baf1dbe8f3e7fb1be5aba587f35] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-524369
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-524369
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=de53afae5e8f4269438099f4dad14d93a8a17e35
	                    minikube.k8s.io/name=no-preload-524369
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T19_04_53_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 19:04:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-524369
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 19:14:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 19:10:09 +0000   Mon, 29 Jul 2024 19:04:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 19:10:09 +0000   Mon, 29 Jul 2024 19:04:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 19:10:09 +0000   Mon, 29 Jul 2024 19:04:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 19:10:09 +0000   Mon, 29 Jul 2024 19:04:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.7
	  Hostname:    no-preload-524369
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2a198bd9cf7443ae868352cb5dee02a8
	  System UUID:                2a198bd9-cf74-43ae-8683-52cb5dee02a8
	  Boot ID:                    e2d860a1-cb75-47b3-a4d7-33e5fbc5df5a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5cfdc65f69-sqjsh                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m12s
	  kube-system                 coredns-5cfdc65f69-w7ptq                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m12s
	  kube-system                 etcd-no-preload-524369                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m16s
	  kube-system                 kube-apiserver-no-preload-524369             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m16s
	  kube-system                 kube-controller-manager-no-preload-524369    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 kube-proxy-fzrdv                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m11s
	  kube-system                 kube-scheduler-no-preload-524369             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 metrics-server-78fcd8795b-l6hjr              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m10s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m10s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  9m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m22s (x8 over 9m23s)  kubelet          Node no-preload-524369 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m22s (x8 over 9m23s)  kubelet          Node no-preload-524369 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m22s (x7 over 9m23s)  kubelet          Node no-preload-524369 status is now: NodeHasSufficientPID
	  Normal  Starting                 9m16s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m16s                  kubelet          Node no-preload-524369 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m16s                  kubelet          Node no-preload-524369 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m16s                  kubelet          Node no-preload-524369 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m11s                  node-controller  Node no-preload-524369 event: Registered Node no-preload-524369 in Controller
	
	
	==> dmesg <==
	[  +0.046879] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.147467] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.555208] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.606893] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.637768] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.053048] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055013] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +0.191053] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[  +0.145799] systemd-fstab-generator[684]: Ignoring "noauto" option for root device
	[  +0.275273] systemd-fstab-generator[713]: Ignoring "noauto" option for root device
	[Jul29 19:00] systemd-fstab-generator[1182]: Ignoring "noauto" option for root device
	[  +0.060068] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.745757] systemd-fstab-generator[1304]: Ignoring "noauto" option for root device
	[  +4.522712] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.919707] kauditd_printk_skb: 86 callbacks suppressed
	[ +26.106375] kauditd_printk_skb: 3 callbacks suppressed
	[Jul29 19:04] kauditd_printk_skb: 3 callbacks suppressed
	[  +1.503919] systemd-fstab-generator[2930]: Ignoring "noauto" option for root device
	[  +4.689688] kauditd_printk_skb: 58 callbacks suppressed
	[  +1.892769] systemd-fstab-generator[3252]: Ignoring "noauto" option for root device
	[  +5.424249] systemd-fstab-generator[3368]: Ignoring "noauto" option for root device
	[  +0.116696] kauditd_printk_skb: 14 callbacks suppressed
	[Jul29 19:05] kauditd_printk_skb: 88 callbacks suppressed
	
	
	==> etcd [565b7d4870cc8f097db55c6187e1917737e3e89601c40ad729ce4bcc12cef063] <==
	{"level":"info","ts":"2024-07-29T19:04:48.98185Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-29T19:04:48.982505Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T19:04:48.984096Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-29T19:04:48.984943Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.7:2379"}
	{"level":"info","ts":"2024-07-29T19:04:48.985395Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bf59147548200944","local-member-id":"c5acdd18885776dc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T19:04:48.986924Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T19:04:48.987131Z","caller":"etcdserver/server.go:2652","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T19:12:19.166056Z","caller":"traceutil/trace.go:171","msg":"trace[754036214] linearizableReadLoop","detail":"{readStateIndex:959; appliedIndex:958; }","duration":"385.098583ms","start":"2024-07-29T19:12:18.780867Z","end":"2024-07-29T19:12:19.165966Z","steps":["trace[754036214] 'read index received'  (duration: 384.921586ms)","trace[754036214] 'applied index is now lower than readState.Index'  (duration: 176.466µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-29T19:12:19.166225Z","caller":"traceutil/trace.go:171","msg":"trace[1167817960] transaction","detail":"{read_only:false; response_revision:856; number_of_response:1; }","duration":"586.660456ms","start":"2024-07-29T19:12:18.579556Z","end":"2024-07-29T19:12:19.166216Z","steps":["trace[1167817960] 'process raft request'  (duration: 586.282809ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T19:12:19.167005Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T19:12:18.579531Z","time spent":"586.715014ms","remote":"127.0.0.1:39992","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":682,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-nrajhd2gmtrsbqymzeo77wiq6y\" mod_revision:848 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-nrajhd2gmtrsbqymzeo77wiq6y\" value_size:609 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-nrajhd2gmtrsbqymzeo77wiq6y\" > >"}
	{"level":"warn","ts":"2024-07-29T19:12:19.167229Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"386.347065ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1118"}
	{"level":"info","ts":"2024-07-29T19:12:19.167376Z","caller":"traceutil/trace.go:171","msg":"trace[1555561073] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:856; }","duration":"386.495398ms","start":"2024-07-29T19:12:18.780863Z","end":"2024-07-29T19:12:19.167358Z","steps":["trace[1555561073] 'agreement among raft nodes before linearized reading'  (duration: 386.263983ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T19:12:19.167402Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T19:12:18.780831Z","time spent":"386.562855ms","remote":"127.0.0.1:39916","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1142,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"warn","ts":"2024-07-29T19:12:19.167529Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"354.257138ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-29T19:12:19.167565Z","caller":"traceutil/trace.go:171","msg":"trace[1917566478] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:856; }","duration":"354.293166ms","start":"2024-07-29T19:12:18.813266Z","end":"2024-07-29T19:12:19.167559Z","steps":["trace[1917566478] 'agreement among raft nodes before linearized reading'  (duration: 354.246203ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T19:12:19.168031Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"300.174502ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-29T19:12:19.168108Z","caller":"traceutil/trace.go:171","msg":"trace[1547239571] range","detail":"{range_begin:/registry/runtimeclasses/; range_end:/registry/runtimeclasses0; response_count:0; response_revision:856; }","duration":"300.269384ms","start":"2024-07-29T19:12:18.867832Z","end":"2024-07-29T19:12:19.168102Z","steps":["trace[1547239571] 'agreement among raft nodes before linearized reading'  (duration: 300.044313ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T19:12:19.168134Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T19:12:18.86779Z","time spent":"300.338658ms","remote":"127.0.0.1:40036","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":0,"response size":29,"request content":"key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" count_only:true "}
	{"level":"warn","ts":"2024-07-29T19:12:19.772357Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"485.827759ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8564880020026481334 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:855 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-07-29T19:12:19.772456Z","caller":"traceutil/trace.go:171","msg":"trace[861697580] linearizableReadLoop","detail":"{readStateIndex:960; appliedIndex:959; }","duration":"394.397689ms","start":"2024-07-29T19:12:19.378045Z","end":"2024-07-29T19:12:19.772442Z","steps":["trace[861697580] 'read index received'  (duration: 27.437µs)","trace[861697580] 'applied index is now lower than readState.Index'  (duration: 394.369037ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T19:12:19.772572Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"394.51736ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-29T19:12:19.772703Z","caller":"traceutil/trace.go:171","msg":"trace[518326760] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:857; }","duration":"394.652062ms","start":"2024-07-29T19:12:19.37804Z","end":"2024-07-29T19:12:19.772692Z","steps":["trace[518326760] 'agreement among raft nodes before linearized reading'  (duration: 394.447526ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T19:12:19.77275Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T19:12:19.378002Z","time spent":"394.739298ms","remote":"127.0.0.1:39938","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2024-07-29T19:12:19.773052Z","caller":"traceutil/trace.go:171","msg":"trace[372355499] transaction","detail":"{read_only:false; response_revision:857; number_of_response:1; }","duration":"601.058857ms","start":"2024-07-29T19:12:19.17198Z","end":"2024-07-29T19:12:19.773039Z","steps":["trace[372355499] 'process raft request'  (duration: 114.212232ms)","trace[372355499] 'compare'  (duration: 485.660386ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T19:12:19.773158Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T19:12:19.171965Z","time spent":"601.148816ms","remote":"127.0.0.1:39916","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:855 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	
	
	==> kernel <==
	 19:14:09 up 14 min,  0 users,  load average: 0.25, 0.24, 0.17
	Linux no-preload-524369 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [93d7fa5f82e2cfb7f4316f748e51228a62bcf15c90cc7a6f2f50c04e84fc6cfb] <==
	W0729 19:04:43.741946       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:43.742069       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:43.747568       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:43.780764       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:43.785324       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:43.804057       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:43.806408       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:43.823970       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:43.844710       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:43.903239       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:43.923926       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:43.954784       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:43.965589       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:44.027054       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:44.115266       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:44.126936       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:44.210717       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:44.216189       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:44.222723       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:44.238113       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:44.262938       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:44.272729       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:44.300901       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:44.454522       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:44.466940       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [b0123ed63d3bb67b7ed32c2266ea6014d1f31cd96ef54a155fa5f07642548a9f] <==
	W0729 19:09:51.371462       1 handler_proxy.go:99] no RequestInfo found in the context
	E0729 19:09:51.371776       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0729 19:09:51.372814       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0729 19:09:51.372862       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 19:10:51.373293       1 handler_proxy.go:99] no RequestInfo found in the context
	W0729 19:10:51.373313       1 handler_proxy.go:99] no RequestInfo found in the context
	E0729 19:10:51.373518       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0729 19:10:51.373582       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0729 19:10:51.374849       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0729 19:10:51.374890       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 19:12:51.375725       1 handler_proxy.go:99] no RequestInfo found in the context
	E0729 19:12:51.375892       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0729 19:12:51.375950       1 handler_proxy.go:99] no RequestInfo found in the context
	E0729 19:12:51.376023       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0729 19:12:51.377039       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0729 19:12:51.377165       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [db5c6899215ea5671d54ffdbef718a3ed7472a7e909b713e0cd3e3924b9aff5a] <==
	E0729 19:08:58.306066       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 19:08:58.364931       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:09:28.312997       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 19:09:28.374075       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:09:58.320019       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 19:09:58.382049       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0729 19:10:09.016520       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-524369"
	E0729 19:10:28.336809       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 19:10:28.389472       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0729 19:10:56.252005       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="240.422µs"
	E0729 19:10:58.343528       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 19:10:58.396805       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0729 19:11:10.248918       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="69.953µs"
	E0729 19:11:28.350087       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 19:11:28.403756       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:11:58.357291       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 19:11:58.412679       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:12:28.365049       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 19:12:28.420911       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:12:58.372502       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 19:12:58.428449       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:13:28.381302       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 19:13:28.438409       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:13:58.387964       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 19:13:58.446006       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [ecf1d196ad19dcb91c3e74f025026a283ee2d7119044448e7baeffdf7d3e36bd] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0729 19:04:59.058510       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0729 19:04:59.071860       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.72.7"]
	E0729 19:04:59.071943       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0729 19:04:59.134763       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0729 19:04:59.134825       1 server.go:246] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 19:04:59.134860       1 server_linux.go:170] "Using iptables Proxier"
	I0729 19:04:59.143573       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0729 19:04:59.143994       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0729 19:04:59.144009       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 19:04:59.148432       1 config.go:104] "Starting endpoint slice config controller"
	I0729 19:04:59.148445       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 19:04:59.148482       1 config.go:197] "Starting service config controller"
	I0729 19:04:59.148486       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 19:04:59.149289       1 config.go:326] "Starting node config controller"
	I0729 19:04:59.149298       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 19:04:59.249359       1 shared_informer.go:320] Caches are synced for service config
	I0729 19:04:59.249454       1 shared_informer.go:320] Caches are synced for node config
	I0729 19:04:59.249465       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [774b6f05ee3602d6336ae397ef0e99a0c2efdb0e8e875055fb477f14ad7c18d9] <==
	W0729 19:04:50.452149       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 19:04:50.452181       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0729 19:04:50.452266       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 19:04:50.452297       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0729 19:04:50.452370       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 19:04:50.452400       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0729 19:04:50.452494       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 19:04:50.452556       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0729 19:04:50.452689       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 19:04:50.452738       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0729 19:04:51.320978       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 19:04:51.321021       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0729 19:04:51.412861       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 19:04:51.413058       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0729 19:04:51.446274       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 19:04:51.446397       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0729 19:04:51.534365       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 19:04:51.534452       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0729 19:04:51.613196       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 19:04:51.613292       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0729 19:04:51.620660       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 19:04:51.620801       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0729 19:04:51.641392       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 19:04:51.642477       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0729 19:04:53.227679       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 19:11:53 no-preload-524369 kubelet[3259]: E0729 19:11:53.299604    3259 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 19:11:53 no-preload-524369 kubelet[3259]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 19:11:53 no-preload-524369 kubelet[3259]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 19:11:53 no-preload-524369 kubelet[3259]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 19:11:53 no-preload-524369 kubelet[3259]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 19:12:02 no-preload-524369 kubelet[3259]: E0729 19:12:02.234416    3259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-l6hjr" podUID="285c1ec8-47a6-4fcd-bcd5-6a5d7d53a06b"
	Jul 29 19:12:17 no-preload-524369 kubelet[3259]: E0729 19:12:17.238104    3259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-l6hjr" podUID="285c1ec8-47a6-4fcd-bcd5-6a5d7d53a06b"
	Jul 29 19:12:32 no-preload-524369 kubelet[3259]: E0729 19:12:32.234385    3259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-l6hjr" podUID="285c1ec8-47a6-4fcd-bcd5-6a5d7d53a06b"
	Jul 29 19:12:43 no-preload-524369 kubelet[3259]: E0729 19:12:43.236258    3259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-l6hjr" podUID="285c1ec8-47a6-4fcd-bcd5-6a5d7d53a06b"
	Jul 29 19:12:53 no-preload-524369 kubelet[3259]: E0729 19:12:53.300923    3259 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 19:12:53 no-preload-524369 kubelet[3259]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 19:12:53 no-preload-524369 kubelet[3259]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 19:12:53 no-preload-524369 kubelet[3259]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 19:12:53 no-preload-524369 kubelet[3259]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 19:12:56 no-preload-524369 kubelet[3259]: E0729 19:12:56.234400    3259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-l6hjr" podUID="285c1ec8-47a6-4fcd-bcd5-6a5d7d53a06b"
	Jul 29 19:13:10 no-preload-524369 kubelet[3259]: E0729 19:13:10.234984    3259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-l6hjr" podUID="285c1ec8-47a6-4fcd-bcd5-6a5d7d53a06b"
	Jul 29 19:13:24 no-preload-524369 kubelet[3259]: E0729 19:13:24.234991    3259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-l6hjr" podUID="285c1ec8-47a6-4fcd-bcd5-6a5d7d53a06b"
	Jul 29 19:13:36 no-preload-524369 kubelet[3259]: E0729 19:13:36.236277    3259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-l6hjr" podUID="285c1ec8-47a6-4fcd-bcd5-6a5d7d53a06b"
	Jul 29 19:13:48 no-preload-524369 kubelet[3259]: E0729 19:13:48.234119    3259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-l6hjr" podUID="285c1ec8-47a6-4fcd-bcd5-6a5d7d53a06b"
	Jul 29 19:13:53 no-preload-524369 kubelet[3259]: E0729 19:13:53.301062    3259 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 19:13:53 no-preload-524369 kubelet[3259]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 19:13:53 no-preload-524369 kubelet[3259]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 19:13:53 no-preload-524369 kubelet[3259]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 19:13:53 no-preload-524369 kubelet[3259]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 19:14:02 no-preload-524369 kubelet[3259]: E0729 19:14:02.234841    3259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-l6hjr" podUID="285c1ec8-47a6-4fcd-bcd5-6a5d7d53a06b"
	
	
	==> storage-provisioner [39ca77d323edde3a471cc09829107e4b60723540d97038b6563f3515f5632480] <==
	I0729 19:05:00.401576       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 19:05:00.445056       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 19:05:00.447069       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 19:05:00.464249       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 19:05:00.475333       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-524369_5e220e07-4372-4972-98c3-b5615c647e2d!
	I0729 19:05:00.473814       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"368411b3-882c-4f89-a9d3-ebf34908c271", APIVersion:"v1", ResourceVersion:"443", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-524369_5e220e07-4372-4972-98c3-b5615c647e2d became leader
	I0729 19:05:00.576177       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-524369_5e220e07-4372-4972-98c3-b5615c647e2d!
	

                                                
                                                
-- /stdout --
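The dump above repeats two unrelated errors: the kubelet's "iptables canary" failure (the guest has no ip6tables `nat' table loaded) and the metrics-server pod stuck in ImagePullBackOff on the unreachable fake.domain image. The canary error is noisy but likely unrelated to the failure being diagnosed here. A minimal way to confirm the missing table from the host, assuming the CI-built binary path shown in this log and assuming the kvm2 guest ships modprobe and an ip6table_nat module at all (neither is verified by this report):

	# check whether the module is loaded inside the guest
	out/minikube-linux-amd64 -p no-preload-524369 ssh -- "lsmod | grep ip6table_nat || echo 'ip6table_nat not loaded'"
	# try loading it and listing the nat table (may fail if the guest kernel omits the module)
	out/minikube-linux-amd64 -p no-preload-524369 ssh -- "sudo modprobe ip6table_nat && sudo ip6tables -t nat -L -n | head"
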
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-524369 -n no-preload-524369
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-524369 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-78fcd8795b-l6hjr
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-524369 describe pod metrics-server-78fcd8795b-l6hjr
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-524369 describe pod metrics-server-78fcd8795b-l6hjr: exit status 1 (61.029395ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-78fcd8795b-l6hjr" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-524369 describe pod metrics-server-78fcd8795b-l6hjr: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.17s)
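The post-mortem describe above returns NotFound because the metrics-server pod had already been replaced or removed by the time the helper ran. If the cluster is still reachable, a sketch that inspects the Deployment rather than a specific pod survives that churn; the deployment name metrics-server and the label k8s-app=metrics-server are assumed from the pod name and the upstream metrics-server manifests, not taken from this log:

	# image the metrics-server pods are trying to pull (the fake.domain reference seen in the kubelet log)
	kubectl --context no-preload-524369 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
	# events for whichever replica currently exists
	kubectl --context no-preload-524369 -n kube-system describe pods -l k8s-app=metrics-server
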

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-368536 -n embed-certs-368536
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-368536 -n embed-certs-368536: exit status 3 (3.172037206s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 19:06:59.629258  156304 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.95:22: connect: no route to host
	E0729 19:06:59.629283  156304 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.95:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-368536 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-368536 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.148635226s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.95:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-368536 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-368536 -n embed-certs-368536
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-368536 -n embed-certs-368536: exit status 3 (3.063549781s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 19:07:08.841203  156384 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.95:22: connect: no route to host
	E0729 19:07:08.841232  156384 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.95:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-368536" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)
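Every command in this subtest fails the same way: after the stop, nothing answers on 192.168.50.95:22, so both the status probe and the addon enable (which runs crictl over SSH) exit with connection errors. A quick host-side sketch for telling a hung-but-running guest from a dead one, assuming the kvm2 driver names the libvirt domain after the profile (an assumption, not shown in this log):

	# is the libvirt domain still defined / running?
	sudo virsh list --all | grep embed-certs-368536
	# is the guest reachable at all, and is sshd answering?
	ping -c 3 -W 2 192.168.50.95
	nc -vz -w 2 192.168.50.95 22
	# minikube's own view, with verbose logging
	out/minikube-linux-amd64 -p embed-certs-368536 status --alsologtostderr
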

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.43s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
E0729 19:07:46.273362   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/enable-default-cni-085245/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
E0729 19:08:02.152602   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/kindnet-085245/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
E0729 19:08:18.903380   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/functional-810151/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
E0729 19:08:38.589683   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/flannel-085245/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
E0729 19:08:56.384242   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
E0729 19:09:03.445937   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/custom-flannel-085245/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
E0729 19:09:09.317612   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/enable-default-cni-085245/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
(previous warning repeated 17 more times)
E0729 19:09:53.656490   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/bridge-085245/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
(previous warning repeated 7 more times)
E0729 19:10:01.634333   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/flannel-085245/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
(previous warning repeated 2 more times)
E0729 19:10:04.528353   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/calico-085245/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
(previous warning repeated 42 more times)
E0729 19:10:47.656764   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/auto-085245/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
(previous warning repeated 4 more times)
E0729 19:10:53.334310   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
(previous warning repeated 23 more times)
E0729 19:11:16.702057   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/bridge-085245/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
(previous warning repeated 10 more times)
E0729 19:11:27.572694   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/calico-085245/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
(previous warning repeated 10 more times)
E0729 19:11:39.106155   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/kindnet-085245/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
E0729 19:12:40.400241   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/custom-flannel-085245/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
E0729 19:12:46.273907   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/enable-default-cni-085245/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
E0729 19:13:18.903533   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/functional-810151/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
E0729 19:13:38.590107   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/flannel-085245/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
E0729 19:14:53.656851   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/bridge-085245/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
E0729 19:15:04.528140   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/calico-085245/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
E0729 19:15:47.656939   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/auto-085245/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
E0729 19:15:53.334766   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
E0729 19:16:21.952922   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/functional-810151/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
E0729 19:16:39.106681   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/kindnet-085245/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
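Each connection-refused warning above corresponds to one poll of the same pods endpoint; the final entry shows the client-side rate limiter returning once the wait's context deadline expires. The polled request can be reproduced directly with the URL from the warnings; a minimal sketch, assuming the standard minikube certificate layout (ca.crt under .minikube, and client.crt/client.key under the profile directory, as seen in the cert_rotation errors above for other profiles):

    curl --cacert /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt \
      --cert /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/old-k8s-version-834964/client.crt \
      --key /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/old-k8s-version-834964/client.key \
      "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard"

While the apiserver on 192.168.61.89:8443 is down, this fails with the same connect: connection refused, so the repetition reflects the helper's poll loop hitting a refused connection until the deadline runs out.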
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-834964 -n old-k8s-version-834964
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-834964 -n old-k8s-version-834964: exit status 2 (218.768468ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-834964" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
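For manual triage, the same check can be approximated outside the test harness; a minimal sketch, assuming the kubectl context for the profile carries the profile name (minikube's default) and reusing the 9m0s timeout from the assertion above:

    out/minikube-linux-amd64 status -p old-k8s-version-834964
    kubectl --context old-k8s-version-834964 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
    kubectl --context old-k8s-version-834964 -n kubernetes-dashboard wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m

With the apiserver reported as Stopped above, both kubectl calls fail with the connection-refused error already shown, which is why the wait exhausts its deadline.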
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-834964 -n old-k8s-version-834964
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-834964 -n old-k8s-version-834964: exit status 2 (211.963096ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-834964 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p old-k8s-version-834964        | old-k8s-version-834964       | jenkins | v1.33.1 | 29 Jul 24 18:53 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-524369                  | no-preload-524369            | jenkins | v1.33.1 | 29 Jul 24 18:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-524369 --memory=2200                     | no-preload-524369            | jenkins | v1.33.1 | 29 Jul 24 18:54 UTC | 29 Jul 24 19:05 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-612270       | default-k8s-diff-port-612270 | jenkins | v1.33.1 | 29 Jul 24 18:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-612270 | jenkins | v1.33.1 | 29 Jul 24 18:55 UTC | 29 Jul 24 19:04 UTC |
	|         | default-k8s-diff-port-612270                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-834964                              | old-k8s-version-834964       | jenkins | v1.33.1 | 29 Jul 24 18:55 UTC | 29 Jul 24 18:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-834964             | old-k8s-version-834964       | jenkins | v1.33.1 | 29 Jul 24 18:55 UTC | 29 Jul 24 18:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-834964                              | old-k8s-version-834964       | jenkins | v1.33.1 | 29 Jul 24 18:55 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-974855                              | cert-expiration-974855       | jenkins | v1.33.1 | 29 Jul 24 19:01 UTC | 29 Jul 24 19:01 UTC |
	| start   | -p newest-cni-453780 --memory=2200 --alsologtostderr   | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:01 UTC | 29 Jul 24 19:02 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-453780             | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-453780                                   | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-453780                  | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-453780 --memory=2200 --alsologtostderr   | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| image   | newest-cni-453780 image list                           | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-453780                                   | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-453780                                   | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-453780                                   | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	| delete  | -p newest-cni-453780                                   | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	| delete  | -p                                                     | disable-driver-mounts-148539 | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | disable-driver-mounts-148539                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-368536                                  | embed-certs-368536           | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:04 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-368536            | embed-certs-368536           | jenkins | v1.33.1 | 29 Jul 24 19:04 UTC | 29 Jul 24 19:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-368536                                  | embed-certs-368536           | jenkins | v1.33.1 | 29 Jul 24 19:04 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-368536                 | embed-certs-368536           | jenkins | v1.33.1 | 29 Jul 24 19:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-368536                                  | embed-certs-368536           | jenkins | v1.33.1 | 29 Jul 24 19:07 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 19:07:08
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 19:07:08.883432  156414 out.go:291] Setting OutFile to fd 1 ...
	I0729 19:07:08.883769  156414 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:07:08.883781  156414 out.go:304] Setting ErrFile to fd 2...
	I0729 19:07:08.883788  156414 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:07:08.883976  156414 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19339-88081/.minikube/bin
	I0729 19:07:08.884534  156414 out.go:298] Setting JSON to false
	I0729 19:07:08.885578  156414 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":13749,"bootTime":1722266280,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 19:07:08.885642  156414 start.go:139] virtualization: kvm guest
	I0729 19:07:08.888023  156414 out.go:177] * [embed-certs-368536] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 19:07:08.889601  156414 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 19:07:08.889601  156414 notify.go:220] Checking for updates...
	I0729 19:07:08.892436  156414 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 19:07:08.893966  156414 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19339-88081/kubeconfig
	I0729 19:07:08.895257  156414 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19339-88081/.minikube
	I0729 19:07:08.896516  156414 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 19:07:08.897746  156414 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 19:07:08.899225  156414 config.go:182] Loaded profile config "embed-certs-368536": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:07:08.899588  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:07:08.899642  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:07:08.914943  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40387
	I0729 19:07:08.915307  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:07:08.915885  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:07:08.915905  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:07:08.916305  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:07:08.916486  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:07:08.916703  156414 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 19:07:08.917034  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:07:08.917074  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:07:08.931497  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36817
	I0729 19:07:08.931857  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:07:08.932280  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:07:08.932300  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:07:08.932640  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:07:08.932819  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:07:08.968386  156414 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 19:07:08.969538  156414 start.go:297] selected driver: kvm2
	I0729 19:07:08.969556  156414 start.go:901] validating driver "kvm2" against &{Name:embed-certs-368536 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-368536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.95 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:07:08.969681  156414 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 19:07:08.970358  156414 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 19:07:08.970428  156414 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19339-88081/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 19:07:08.986808  156414 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 19:07:08.987271  156414 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 19:07:08.987351  156414 cni.go:84] Creating CNI manager for ""
	I0729 19:07:08.987370  156414 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:07:08.987443  156414 start.go:340] cluster config:
	{Name:embed-certs-368536 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-368536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.95 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:07:08.987605  156414 iso.go:125] acquiring lock: {Name:mkff602c753bfa3e0d79d57ee8dc490f2e8d0298 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 19:07:08.989202  156414 out.go:177] * Starting "embed-certs-368536" primary control-plane node in "embed-certs-368536" cluster
	I0729 19:07:08.990452  156414 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 19:07:08.990496  156414 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 19:07:08.990510  156414 cache.go:56] Caching tarball of preloaded images
	I0729 19:07:08.990604  156414 preload.go:172] Found /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 19:07:08.990618  156414 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 19:07:08.990746  156414 profile.go:143] Saving config to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/config.json ...
	I0729 19:07:08.990949  156414 start.go:360] acquireMachinesLock for embed-certs-368536: {Name:mkb4fc91615189a18c2505c715f6575ee0da2912 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 19:07:08.990994  156414 start.go:364] duration metric: took 26.792µs to acquireMachinesLock for "embed-certs-368536"
	I0729 19:07:08.991013  156414 start.go:96] Skipping create...Using existing machine configuration
	I0729 19:07:08.991023  156414 fix.go:54] fixHost starting: 
	I0729 19:07:08.991314  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:07:08.991356  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:07:09.006100  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45265
	I0729 19:07:09.006507  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:07:09.007034  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:07:09.007060  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:07:09.007401  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:07:09.007594  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:07:09.007758  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetState
	I0729 19:07:09.009424  156414 fix.go:112] recreateIfNeeded on embed-certs-368536: state=Running err=<nil>
	W0729 19:07:09.009448  156414 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 19:07:09.011240  156414 out.go:177] * Updating the running kvm2 "embed-certs-368536" VM ...
	I0729 19:07:09.012305  156414 machine.go:94] provisionDockerMachine start ...
	I0729 19:07:09.012324  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:07:09.012506  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:07:09.014924  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:07:09.015334  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:07:09.015362  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:07:09.015507  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:07:09.015664  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:07:09.015796  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:07:09.015962  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:07:09.016106  156414 main.go:141] libmachine: Using SSH client type: native
	I0729 19:07:09.016288  156414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.95 22 <nil> <nil>}
	I0729 19:07:09.016300  156414 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 19:07:11.913157  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:14.985074  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:21.065092  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:24.137179  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:30.217186  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:33.289154  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:41.417004  152077 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:07:41.417303  152077 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:07:41.417326  152077 kubeadm.go:310] 
	I0729 19:07:41.417370  152077 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 19:07:41.417434  152077 kubeadm.go:310] 		timed out waiting for the condition
	I0729 19:07:41.417456  152077 kubeadm.go:310] 
	I0729 19:07:41.417514  152077 kubeadm.go:310] 	This error is likely caused by:
	I0729 19:07:41.417586  152077 kubeadm.go:310] 		- The kubelet is not running
	I0729 19:07:41.417720  152077 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 19:07:41.417732  152077 kubeadm.go:310] 
	I0729 19:07:41.417870  152077 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 19:07:41.417917  152077 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 19:07:41.417972  152077 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 19:07:41.417982  152077 kubeadm.go:310] 
	I0729 19:07:41.418104  152077 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 19:07:41.418232  152077 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 19:07:41.418247  152077 kubeadm.go:310] 
	I0729 19:07:41.418370  152077 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 19:07:41.418477  152077 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 19:07:41.418596  152077 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 19:07:41.418696  152077 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 19:07:41.418706  152077 kubeadm.go:310] 
	I0729 19:07:41.419442  152077 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 19:07:41.419562  152077 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 19:07:41.419660  152077 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
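	(Editor's note: the kubeadm output above already names the troubleshooting steps; collected here as a minimal shell sketch, assuming the commands are run on the affected node, e.g. via 'minikube ssh' against the failing profile, which is not shown in this log. The cri-o socket path and crictl invocations are copied verbatim from that output.)
	  # is the kubelet running, and why did it exit?
	  sudo systemctl status kubelet
	  sudo journalctl -xeu kubelet
	  # list Kubernetes containers under cri-o, then inspect the failing one
	  sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	  sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID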
	I0729 19:07:41.419767  152077 kubeadm.go:394] duration metric: took 8m3.273724985s to StartCluster
	I0729 19:07:41.419853  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:07:41.419923  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:07:41.466895  152077 cri.go:89] found id: ""
	I0729 19:07:41.466922  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.466933  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:07:41.466941  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:07:41.467013  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:07:41.500836  152077 cri.go:89] found id: ""
	I0729 19:07:41.500876  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.500888  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:07:41.500896  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:07:41.500949  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:07:41.534917  152077 cri.go:89] found id: ""
	I0729 19:07:41.534946  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.534958  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:07:41.534968  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:07:41.535038  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:07:41.570516  152077 cri.go:89] found id: ""
	I0729 19:07:41.570545  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.570556  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:07:41.570565  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:07:41.570640  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:07:41.608833  152077 cri.go:89] found id: ""
	I0729 19:07:41.608881  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.608894  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:07:41.608902  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:07:41.608969  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:07:41.644079  152077 cri.go:89] found id: ""
	I0729 19:07:41.644114  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.644127  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:07:41.644136  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:07:41.644198  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:07:41.678991  152077 cri.go:89] found id: ""
	I0729 19:07:41.679019  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.679028  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:07:41.679035  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:07:41.679088  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:07:41.722775  152077 cri.go:89] found id: ""
	I0729 19:07:41.722803  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.722815  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:07:41.722829  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:07:41.722857  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:07:41.841614  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:07:41.841641  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:07:41.841658  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:07:41.945679  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:07:41.945716  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:07:41.984505  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:07:41.984536  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:07:42.037376  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:07:42.037418  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0729 19:07:42.051259  152077 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0729 19:07:42.051310  152077 out.go:239] * 
	W0729 19:07:42.051369  152077 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 19:07:42.051390  152077 out.go:239] * 
	W0729 19:07:42.052280  152077 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 19:07:42.055255  152077 out.go:177] 
	W0729 19:07:42.056299  152077 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 19:07:42.056362  152077 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0729 19:07:42.056391  152077 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0729 19:07:42.057745  152077 out.go:177] 
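	(Editor's note: the suggestion above is the remediation minikube prints for K8S_KUBELET_NOT_RUNNING. A hedged illustration of retrying the start with that extra config; the profile name is a placeholder and the driver/runtime flags mirror this job's configuration rather than an exact command line from this log.)
	  minikube start -p <profile> --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd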
	I0729 19:07:42.413268  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:45.481071  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:51.561079  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:54.633091  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:00.713100  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:03.785182  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:09.865116  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:12.937102  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:19.017094  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:22.093191  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:28.169116  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:31.241130  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:37.321134  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:40.393092  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:46.473118  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:49.545166  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:55.625086  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:58.697184  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:04.777113  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:07.849165  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:13.933065  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:17.001100  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:23.081133  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:26.153086  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:32.233178  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:35.305183  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:41.385184  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:44.461106  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:50.537120  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:53.609150  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:59.689091  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:02.761193  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:08.845074  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:11.917067  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:17.993090  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:21.065137  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:27.145098  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:30.217175  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:36.301060  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:39.369118  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:45.449082  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:48.521097  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:54.601120  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:57.673200  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:03.753116  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:06.825136  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:12.905195  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:15.977118  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:22.057076  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:25.129144  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:31.209150  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:34.281164  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:40.365067  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:43.437113  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:46.437452  156414 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 19:11:46.437532  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetMachineName
	I0729 19:11:46.437865  156414 buildroot.go:166] provisioning hostname "embed-certs-368536"
	I0729 19:11:46.437902  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetMachineName
	I0729 19:11:46.438117  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:11:46.439690  156414 machine.go:97] duration metric: took 4m37.427371174s to provisionDockerMachine
	I0729 19:11:46.439733  156414 fix.go:56] duration metric: took 4m37.448711854s for fixHost
	I0729 19:11:46.439746  156414 start.go:83] releasing machines lock for "embed-certs-368536", held for 4m37.448735558s
	W0729 19:11:46.439780  156414 start.go:714] error starting host: provision: host is not running
	W0729 19:11:46.439926  156414 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0729 19:11:46.439941  156414 start.go:729] Will try again in 5 seconds ...
	I0729 19:11:51.440155  156414 start.go:360] acquireMachinesLock for embed-certs-368536: {Name:mkb4fc91615189a18c2505c715f6575ee0da2912 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 19:11:51.440266  156414 start.go:364] duration metric: took 69.381µs to acquireMachinesLock for "embed-certs-368536"
	I0729 19:11:51.440311  156414 start.go:96] Skipping create...Using existing machine configuration
	I0729 19:11:51.440322  156414 fix.go:54] fixHost starting: 
	I0729 19:11:51.440735  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:11:51.440764  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:11:51.455959  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35793
	I0729 19:11:51.456443  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:11:51.457053  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:11:51.457078  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:11:51.457475  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:11:51.457721  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:11:51.457885  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetState
	I0729 19:11:51.459531  156414 fix.go:112] recreateIfNeeded on embed-certs-368536: state=Stopped err=<nil>
	I0729 19:11:51.459558  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	W0729 19:11:51.459736  156414 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 19:11:51.461603  156414 out.go:177] * Restarting existing kvm2 VM for "embed-certs-368536" ...
	I0729 19:11:51.462768  156414 main.go:141] libmachine: (embed-certs-368536) Calling .Start
	I0729 19:11:51.462973  156414 main.go:141] libmachine: (embed-certs-368536) Ensuring networks are active...
	I0729 19:11:51.463679  156414 main.go:141] libmachine: (embed-certs-368536) Ensuring network default is active
	I0729 19:11:51.464155  156414 main.go:141] libmachine: (embed-certs-368536) Ensuring network mk-embed-certs-368536 is active
	I0729 19:11:51.464571  156414 main.go:141] libmachine: (embed-certs-368536) Getting domain xml...
	I0729 19:11:51.465359  156414 main.go:141] libmachine: (embed-certs-368536) Creating domain...
	I0729 19:11:51.794918  156414 main.go:141] libmachine: (embed-certs-368536) Waiting to get IP...
	I0729 19:11:51.795679  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:51.796159  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:51.796221  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:51.796150  157493 retry.go:31] will retry after 235.729444ms: waiting for machine to come up
	I0729 19:11:52.033554  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:52.034180  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:52.034207  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:52.034103  157493 retry.go:31] will retry after 323.595446ms: waiting for machine to come up
	I0729 19:11:52.359640  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:52.360204  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:52.360233  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:52.360143  157493 retry.go:31] will retry after 341.954873ms: waiting for machine to come up
	I0729 19:11:52.703779  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:52.704350  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:52.704373  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:52.704298  157493 retry.go:31] will retry after 385.738264ms: waiting for machine to come up
	I0729 19:11:53.091976  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:53.092451  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:53.092484  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:53.092397  157493 retry.go:31] will retry after 759.811264ms: waiting for machine to come up
	I0729 19:11:53.853241  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:53.853799  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:53.853826  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:53.853755  157493 retry.go:31] will retry after 761.294244ms: waiting for machine to come up
	I0729 19:11:54.616298  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:54.617009  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:54.617039  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:54.616952  157493 retry.go:31] will retry after 1.056936741s: waiting for machine to come up
	I0729 19:11:55.675047  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:55.675491  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:55.675514  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:55.675442  157493 retry.go:31] will retry after 1.232760679s: waiting for machine to come up
	I0729 19:11:56.909745  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:56.910283  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:56.910309  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:56.910228  157493 retry.go:31] will retry after 1.432617964s: waiting for machine to come up
	I0729 19:11:58.344399  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:58.345006  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:58.345033  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:58.344957  157493 retry.go:31] will retry after 1.914060621s: waiting for machine to come up
	I0729 19:12:00.262146  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:00.262707  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:12:00.262739  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:12:00.262638  157493 retry.go:31] will retry after 2.77447957s: waiting for machine to come up
	I0729 19:12:03.039059  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:03.039693  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:12:03.039718  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:12:03.039650  157493 retry.go:31] will retry after 2.755354142s: waiting for machine to come up
	I0729 19:12:05.797890  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:05.798279  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:12:05.798306  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:12:05.798234  157493 retry.go:31] will retry after 4.501451096s: waiting for machine to come up
	I0729 19:12:10.304116  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.304586  156414 main.go:141] libmachine: (embed-certs-368536) Found IP for machine: 192.168.50.95
	I0729 19:12:10.304616  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has current primary IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.304626  156414 main.go:141] libmachine: (embed-certs-368536) Reserving static IP address...
	I0729 19:12:10.305096  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "embed-certs-368536", mac: "52:54:00:86:e7:e8", ip: "192.168.50.95"} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.305134  156414 main.go:141] libmachine: (embed-certs-368536) DBG | skip adding static IP to network mk-embed-certs-368536 - found existing host DHCP lease matching {name: "embed-certs-368536", mac: "52:54:00:86:e7:e8", ip: "192.168.50.95"}
	I0729 19:12:10.305151  156414 main.go:141] libmachine: (embed-certs-368536) Reserved static IP address: 192.168.50.95
	I0729 19:12:10.305166  156414 main.go:141] libmachine: (embed-certs-368536) Waiting for SSH to be available...
	I0729 19:12:10.305184  156414 main.go:141] libmachine: (embed-certs-368536) DBG | Getting to WaitForSSH function...
	I0729 19:12:10.307568  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.307936  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.307972  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.308079  156414 main.go:141] libmachine: (embed-certs-368536) DBG | Using SSH client type: external
	I0729 19:12:10.308110  156414 main.go:141] libmachine: (embed-certs-368536) DBG | Using SSH private key: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/embed-certs-368536/id_rsa (-rw-------)
	I0729 19:12:10.308140  156414 main.go:141] libmachine: (embed-certs-368536) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.95 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19339-88081/.minikube/machines/embed-certs-368536/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 19:12:10.308157  156414 main.go:141] libmachine: (embed-certs-368536) DBG | About to run SSH command:
	I0729 19:12:10.308170  156414 main.go:141] libmachine: (embed-certs-368536) DBG | exit 0
	I0729 19:12:10.436958  156414 main.go:141] libmachine: (embed-certs-368536) DBG | SSH cmd err, output: <nil>: 
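The WaitForSSH step above probes the guest by invoking the system ssh binary with host-key checking disabled and running `exit 0` until it succeeds. A rough sketch of that probe loop follows; the address, key path, and attempt limit are placeholder values, not the driver's real configuration.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Placeholder address and key path; the real values come from the
	// machine config shown in the log above.
	addr := "docker@192.168.50.95"
	key := "/path/to/id_rsa"

	args := []string{
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", key, addr, "exit 0",
	}
	for attempt := 1; attempt <= 30; attempt++ {
		// Success means sshd is up and the key is accepted.
		if err := exec.Command("ssh", args...).Run(); err == nil {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}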
	I0729 19:12:10.437313  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetConfigRaw
	I0729 19:12:10.437962  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetIP
	I0729 19:12:10.440164  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.440520  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.440545  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.440930  156414 profile.go:143] Saving config to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/config.json ...
	I0729 19:12:10.441184  156414 machine.go:94] provisionDockerMachine start ...
	I0729 19:12:10.441207  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:12:10.441430  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:10.443802  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.444155  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.444187  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.444375  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:10.444541  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:10.444719  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:10.444897  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:10.445064  156414 main.go:141] libmachine: Using SSH client type: native
	I0729 19:12:10.445289  156414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.95 22 <nil> <nil>}
	I0729 19:12:10.445300  156414 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 19:12:10.557371  156414 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 19:12:10.557405  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetMachineName
	I0729 19:12:10.557675  156414 buildroot.go:166] provisioning hostname "embed-certs-368536"
	I0729 19:12:10.557706  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetMachineName
	I0729 19:12:10.557905  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:10.560444  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.560793  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.560819  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.561027  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:10.561249  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:10.561442  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:10.561594  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:10.561783  156414 main.go:141] libmachine: Using SSH client type: native
	I0729 19:12:10.561990  156414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.95 22 <nil> <nil>}
	I0729 19:12:10.562006  156414 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-368536 && echo "embed-certs-368536" | sudo tee /etc/hostname
	I0729 19:12:10.688844  156414 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-368536
	
	I0729 19:12:10.688891  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:10.691701  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.692075  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.692102  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.692293  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:10.692468  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:10.692660  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:10.692821  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:10.693024  156414 main.go:141] libmachine: Using SSH client type: native
	I0729 19:12:10.693222  156414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.95 22 <nil> <nil>}
	I0729 19:12:10.693245  156414 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-368536' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-368536/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-368536' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 19:12:10.814346  156414 main.go:141] libmachine: SSH cmd err, output: <nil>: 
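The SSH command above sets the hostname, persists it to /etc/hostname, and patches /etc/hosts with a 127.0.1.1 entry. A small sketch that assembles the same shell snippet for an arbitrary name (the SSH runner that would execute it is omitted, so this only prints the script):

package main

import "fmt"

// hostnameScript returns the provisioning snippet run over SSH: set the
// hostname, persist it, and patch /etc/hosts.
func hostnameScript(name string) string {
	return fmt.Sprintf(`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname
if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, name)
}

func main() {
	fmt.Println(hostnameScript("embed-certs-368536"))
}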
	I0729 19:12:10.814376  156414 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19339-88081/.minikube CaCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19339-88081/.minikube}
	I0729 19:12:10.814400  156414 buildroot.go:174] setting up certificates
	I0729 19:12:10.814409  156414 provision.go:84] configureAuth start
	I0729 19:12:10.814417  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetMachineName
	I0729 19:12:10.814706  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetIP
	I0729 19:12:10.817503  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.817859  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.817886  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.818025  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:10.820077  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.820443  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.820473  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.820617  156414 provision.go:143] copyHostCerts
	I0729 19:12:10.820703  156414 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem, removing ...
	I0729 19:12:10.820713  156414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem
	I0729 19:12:10.820781  156414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem (1078 bytes)
	I0729 19:12:10.820905  156414 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem, removing ...
	I0729 19:12:10.820914  156414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem
	I0729 19:12:10.820943  156414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem (1123 bytes)
	I0729 19:12:10.821005  156414 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem, removing ...
	I0729 19:12:10.821013  156414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem
	I0729 19:12:10.821041  156414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem (1679 bytes)
	I0729 19:12:10.821115  156414 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem org=jenkins.embed-certs-368536 san=[127.0.0.1 192.168.50.95 embed-certs-368536 localhost minikube]
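The provision step above issues a server certificate whose SANs cover the node IP and hostnames listed in the log. Below is a self-contained sketch of issuing such a certificate with Go's crypto/x509; for illustration it generates a throwaway CA in memory, whereas the real flow signs with the existing ca.pem/ca-key.pem under the minikube home.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA (illustrative only; real code loads ca.pem / ca-key.pem).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-368536"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		// SANs: the IP addresses and DNS names from the provision line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.95")},
		DNSNames:    []string{"embed-certs-368536", "localhost", "minikube"},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("issued server cert, %d DER bytes\n", len(der))
}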
	I0729 19:12:10.867595  156414 provision.go:177] copyRemoteCerts
	I0729 19:12:10.867661  156414 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 19:12:10.867685  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:10.870501  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.870857  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.870881  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.871063  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:10.871267  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:10.871428  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:10.871595  156414 sshutil.go:53] new ssh client: &{IP:192.168.50.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/embed-certs-368536/id_rsa Username:docker}
	I0729 19:12:10.956269  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 19:12:10.981554  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 19:12:11.006055  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 19:12:11.031709  156414 provision.go:87] duration metric: took 217.286178ms to configureAuth
	I0729 19:12:11.031746  156414 buildroot.go:189] setting minikube options for container-runtime
	I0729 19:12:11.031924  156414 config.go:182] Loaded profile config "embed-certs-368536": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:12:11.032088  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:11.034772  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.035129  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:11.035159  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.035405  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:11.035583  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:11.035737  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:11.035863  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:11.036016  156414 main.go:141] libmachine: Using SSH client type: native
	I0729 19:12:11.036203  156414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.95 22 <nil> <nil>}
	I0729 19:12:11.036218  156414 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 19:12:11.321782  156414 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 19:12:11.321810  156414 machine.go:97] duration metric: took 880.608535ms to provisionDockerMachine
	I0729 19:12:11.321823  156414 start.go:293] postStartSetup for "embed-certs-368536" (driver="kvm2")
	I0729 19:12:11.321834  156414 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 19:12:11.321850  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:12:11.322243  156414 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 19:12:11.322281  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:11.325081  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.325437  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:11.325459  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.325612  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:11.325841  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:11.326025  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:11.326177  156414 sshutil.go:53] new ssh client: &{IP:192.168.50.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/embed-certs-368536/id_rsa Username:docker}
	I0729 19:12:11.412548  156414 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 19:12:11.417360  156414 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 19:12:11.417386  156414 filesync.go:126] Scanning /home/jenkins/minikube-integration/19339-88081/.minikube/addons for local assets ...
	I0729 19:12:11.417466  156414 filesync.go:126] Scanning /home/jenkins/minikube-integration/19339-88081/.minikube/files for local assets ...
	I0729 19:12:11.417538  156414 filesync.go:149] local asset: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem -> 952822.pem in /etc/ssl/certs
	I0729 19:12:11.417632  156414 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 19:12:11.429176  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem --> /etc/ssl/certs/952822.pem (1708 bytes)
	I0729 19:12:11.457333  156414 start.go:296] duration metric: took 135.490005ms for postStartSetup
	I0729 19:12:11.457401  156414 fix.go:56] duration metric: took 20.017075153s for fixHost
	I0729 19:12:11.457428  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:11.460243  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.460653  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:11.460702  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.460873  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:11.461060  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:11.461233  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:11.461408  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:11.461586  156414 main.go:141] libmachine: Using SSH client type: native
	I0729 19:12:11.461763  156414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.95 22 <nil> <nil>}
	I0729 19:12:11.461779  156414 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 19:12:11.573697  156414 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722280331.531987438
	
	I0729 19:12:11.573724  156414 fix.go:216] guest clock: 1722280331.531987438
	I0729 19:12:11.573735  156414 fix.go:229] Guest: 2024-07-29 19:12:11.531987438 +0000 UTC Remote: 2024-07-29 19:12:11.457406225 +0000 UTC m=+302.608153452 (delta=74.581213ms)
	I0729 19:12:11.573758  156414 fix.go:200] guest clock delta is within tolerance: 74.581213ms
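The fix.go lines above read `date +%s.%N` on the guest, compare it with the host clock, and accept the skew if it falls inside a tolerance. A sketch of that comparison, reusing the two timestamps from the log; the 2-second tolerance here is an assumed value for illustration, not necessarily minikube's threshold.

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns "seconds.nanoseconds" output from `date +%s.%N`
// into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1722280331.531987438") // guest value from the log
	if err != nil {
		panic(err)
	}
	host := time.Unix(1722280331, 457406225) // host-side timestamp from the log
	delta := guest.Sub(host)
	tolerance := 2 * time.Second // assumed tolerance for this sketch
	fmt.Printf("delta=%v within tolerance: %v\n", delta, math.Abs(delta.Seconds()) <= tolerance.Seconds())
}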
	I0729 19:12:11.573763  156414 start.go:83] releasing machines lock for "embed-certs-368536", held for 20.133485011s
	I0729 19:12:11.573782  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:12:11.574056  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetIP
	I0729 19:12:11.576405  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.576760  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:11.576798  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.576988  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:12:11.577479  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:12:11.577644  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:12:11.577737  156414 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 19:12:11.577799  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:11.577840  156414 ssh_runner.go:195] Run: cat /version.json
	I0729 19:12:11.577869  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:11.580473  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.580564  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.580840  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:11.580890  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.580921  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:11.580936  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.581056  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:11.581203  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:11.581358  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:11.581369  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:11.581554  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:11.581564  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:11.581669  156414 sshutil.go:53] new ssh client: &{IP:192.168.50.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/embed-certs-368536/id_rsa Username:docker}
	I0729 19:12:11.581732  156414 sshutil.go:53] new ssh client: &{IP:192.168.50.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/embed-certs-368536/id_rsa Username:docker}
	I0729 19:12:11.662254  156414 ssh_runner.go:195] Run: systemctl --version
	I0729 19:12:11.688214  156414 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 19:12:11.838322  156414 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 19:12:11.844435  156414 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 19:12:11.844521  156414 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 19:12:11.859899  156414 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 19:12:11.859923  156414 start.go:495] detecting cgroup driver to use...
	I0729 19:12:11.859990  156414 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 19:12:11.876768  156414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 19:12:11.890508  156414 docker.go:217] disabling cri-docker service (if available) ...
	I0729 19:12:11.890584  156414 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 19:12:11.904817  156414 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 19:12:11.919251  156414 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 19:12:12.053205  156414 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 19:12:12.205975  156414 docker.go:233] disabling docker service ...
	I0729 19:12:12.206041  156414 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 19:12:12.222129  156414 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 19:12:12.235288  156414 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 19:12:12.386940  156414 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 19:12:12.503688  156414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 19:12:12.518539  156414 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 19:12:12.538744  156414 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 19:12:12.538805  156414 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:12:12.549615  156414 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 19:12:12.549683  156414 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:12:12.560565  156414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:12:12.571814  156414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:12:12.582852  156414 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 19:12:12.595237  156414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:12:12.607232  156414 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:12:12.626183  156414 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:12:12.637202  156414 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 19:12:12.647141  156414 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 19:12:12.647204  156414 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 19:12:12.661099  156414 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 19:12:12.671936  156414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:12:12.795418  156414 ssh_runner.go:195] Run: sudo systemctl restart crio
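The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup), loads br_netfilter, enables IP forwarding, and restarts CRI-O. A sketch that assembles the same shell edits as data, taking the two knobs seen in the log as parameters; it only prints the commands rather than running them over SSH.

package main

import "fmt"

// crioConfigCommands returns the shell edits applied to the CRI-O drop-in
// config, followed by the kernel and service steps from the log.
func crioConfigCommands(pauseImage, cgroupManager string) []string {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	return []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupManager, conf),
		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
		"sudo modprobe br_netfilter",
		`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`,
		"sudo systemctl daemon-reload && sudo systemctl restart crio",
	}
}

func main() {
	for _, cmd := range crioConfigCommands("registry.k8s.io/pause:3.9", "cgroupfs") {
		fmt.Println(cmd)
	}
}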
	I0729 19:12:12.934125  156414 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 19:12:12.934220  156414 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 19:12:12.939058  156414 start.go:563] Will wait 60s for crictl version
	I0729 19:12:12.939123  156414 ssh_runner.go:195] Run: which crictl
	I0729 19:12:12.943972  156414 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 19:12:12.990564  156414 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 19:12:12.990672  156414 ssh_runner.go:195] Run: crio --version
	I0729 19:12:13.018852  156414 ssh_runner.go:195] Run: crio --version
	I0729 19:12:13.053593  156414 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 19:12:13.054890  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetIP
	I0729 19:12:13.057601  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:13.057994  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:13.058025  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:13.058229  156414 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0729 19:12:13.062303  156414 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 19:12:13.075815  156414 kubeadm.go:883] updating cluster {Name:embed-certs-368536 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-368536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.95 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 19:12:13.075989  156414 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 19:12:13.076042  156414 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:12:13.112073  156414 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 19:12:13.112136  156414 ssh_runner.go:195] Run: which lz4
	I0729 19:12:13.116209  156414 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 19:12:13.120509  156414 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 19:12:13.120545  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 19:12:14.521429  156414 crio.go:462] duration metric: took 1.405252765s to copy over tarball
	I0729 19:12:14.521516  156414 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 19:12:16.740086  156414 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.218529852s)
	I0729 19:12:16.740129  156414 crio.go:469] duration metric: took 2.218665992s to extract the tarball
	I0729 19:12:16.740140  156414 ssh_runner.go:146] rm: /preloaded.tar.lz4
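Because the guest had no /preloaded.tar.lz4, the tarball was copied over and unpacked into /var with lz4 decompression and security xattrs preserved. A sketch of that extraction step; the tarball path is a placeholder, and sudo, tar, and lz4 are assumed to be present on the machine running it.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Placeholder path; in the log the file is scp'd to /preloaded.tar.lz4 first.
	tarball := "/preloaded.tar.lz4"

	// Same flags as the log: keep security xattrs, decompress with lz4,
	// and unpack under /var so the container runtime's image store is populated.
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
		return
	}
	fmt.Println("preloaded images extracted")
}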
	I0729 19:12:16.779302  156414 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:12:16.825787  156414 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 19:12:16.825823  156414 cache_images.go:84] Images are preloaded, skipping loading
	I0729 19:12:16.825832  156414 kubeadm.go:934] updating node { 192.168.50.95 8443 v1.30.3 crio true true} ...
	I0729 19:12:16.825972  156414 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-368536 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.95
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-368536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 19:12:16.826034  156414 ssh_runner.go:195] Run: crio config
	I0729 19:12:16.873701  156414 cni.go:84] Creating CNI manager for ""
	I0729 19:12:16.873734  156414 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:12:16.873752  156414 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 19:12:16.873776  156414 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.95 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-368536 NodeName:embed-certs-368536 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.95"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.95 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 19:12:16.873923  156414 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.95
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-368536"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.95
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.95"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 19:12:16.873987  156414 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 19:12:16.884427  156414 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 19:12:16.884530  156414 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 19:12:16.895097  156414 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0729 19:12:16.914112  156414 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 19:12:16.931797  156414 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
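The kubelet unit and kubeadm configuration dumped above are rendered and copied onto the node as 10-kubeadm.conf and kubeadm.yaml.new. A sketch that renders the kubelet systemd drop-in from a template; the struct and field names are illustrative, not minikube's actual template data.

package main

import (
	"os"
	"text/template"
)

// kubeletOpts holds example inputs mirrored from the unit shown in the log.
type kubeletOpts struct {
	KubernetesVersion string
	NodeName          string
	NodeIP            string
}

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	_ = t.Execute(os.Stdout, kubeletOpts{
		KubernetesVersion: "v1.30.3",
		NodeName:          "embed-certs-368536",
		NodeIP:            "192.168.50.95",
	})
}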
	I0729 19:12:16.949824  156414 ssh_runner.go:195] Run: grep 192.168.50.95	control-plane.minikube.internal$ /etc/hosts
	I0729 19:12:16.953765  156414 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.95	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 19:12:16.967138  156414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:12:17.088789  156414 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 19:12:17.108577  156414 certs.go:68] Setting up /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536 for IP: 192.168.50.95
	I0729 19:12:17.108601  156414 certs.go:194] generating shared ca certs ...
	I0729 19:12:17.108623  156414 certs.go:226] acquiring lock for ca certs: {Name:mk20106d58b3f22ea0fc0f4b499fa3fa572a2690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:12:17.108831  156414 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key
	I0729 19:12:17.109076  156414 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key
	I0729 19:12:17.109118  156414 certs.go:256] generating profile certs ...
	I0729 19:12:17.109296  156414 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/client.key
	I0729 19:12:17.109394  156414 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/apiserver.key.77ca8755
	I0729 19:12:17.109448  156414 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/proxy-client.key
	I0729 19:12:17.109619  156414 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282.pem (1338 bytes)
	W0729 19:12:17.109651  156414 certs.go:480] ignoring /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282_empty.pem, impossibly tiny 0 bytes
	I0729 19:12:17.109661  156414 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 19:12:17.109688  156414 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem (1078 bytes)
	I0729 19:12:17.109722  156414 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem (1123 bytes)
	I0729 19:12:17.109756  156414 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem (1679 bytes)
	I0729 19:12:17.109819  156414 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem (1708 bytes)
	I0729 19:12:17.110926  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 19:12:17.142003  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 19:12:17.176798  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 19:12:17.204272  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 19:12:17.244558  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0729 19:12:17.281935  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 19:12:17.306456  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 19:12:17.332576  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 19:12:17.357372  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282.pem --> /usr/share/ca-certificates/95282.pem (1338 bytes)
	I0729 19:12:17.380363  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem --> /usr/share/ca-certificates/952822.pem (1708 bytes)
	I0729 19:12:17.403737  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 19:12:17.428493  156414 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 19:12:17.445767  156414 ssh_runner.go:195] Run: openssl version
	I0729 19:12:17.451514  156414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/952822.pem && ln -fs /usr/share/ca-certificates/952822.pem /etc/ssl/certs/952822.pem"
	I0729 19:12:17.463663  156414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/952822.pem
	I0729 19:12:17.469467  156414 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:45 /usr/share/ca-certificates/952822.pem
	I0729 19:12:17.469523  156414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/952822.pem
	I0729 19:12:17.475754  156414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/952822.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 19:12:17.487617  156414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 19:12:17.498699  156414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:12:17.503205  156414 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:12:17.503257  156414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:12:17.508880  156414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 19:12:17.519570  156414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/95282.pem && ln -fs /usr/share/ca-certificates/95282.pem /etc/ssl/certs/95282.pem"
	I0729 19:12:17.530630  156414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/95282.pem
	I0729 19:12:17.535004  156414 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:45 /usr/share/ca-certificates/95282.pem
	I0729 19:12:17.535046  156414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/95282.pem
	I0729 19:12:17.540647  156414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/95282.pem /etc/ssl/certs/51391683.0"
	I0729 19:12:17.551719  156414 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 19:12:17.556199  156414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 19:12:17.562469  156414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 19:12:17.568561  156414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 19:12:17.574653  156414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 19:12:17.580458  156414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 19:12:17.586392  156414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
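The openssl -checkend calls above verify that each control-plane certificate remains valid for at least another 24 hours. An equivalent pure-Go check, with the certificate path taken from the log as an example input:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// notExpiringWithin reports whether the PEM certificate at path is still valid
// `window` from now, i.e. the rough equivalent of
// `openssl x509 -noout -checkend 86400`.
func notExpiringWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).Before(cert.NotAfter), nil
}

func main() {
	ok, err := notExpiringWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("valid for at least 24h:", ok)
}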
	I0729 19:12:17.592304  156414 kubeadm.go:392] StartCluster: {Name:embed-certs-368536 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:embed-certs-368536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.95 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:12:17.592422  156414 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 19:12:17.592467  156414 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:12:17.631628  156414 cri.go:89] found id: ""
	I0729 19:12:17.631701  156414 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 19:12:17.642636  156414 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 19:12:17.642665  156414 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 19:12:17.642731  156414 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 19:12:17.652551  156414 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 19:12:17.653603  156414 kubeconfig.go:125] found "embed-certs-368536" server: "https://192.168.50.95:8443"
	I0729 19:12:17.656022  156414 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 19:12:17.666987  156414 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.95
	I0729 19:12:17.667014  156414 kubeadm.go:1160] stopping kube-system containers ...
	I0729 19:12:17.667026  156414 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 19:12:17.667065  156414 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:12:17.709534  156414 cri.go:89] found id: ""
	I0729 19:12:17.709598  156414 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 19:12:17.729709  156414 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:12:17.739968  156414 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:12:17.739990  156414 kubeadm.go:157] found existing configuration files:
	
	I0729 19:12:17.740051  156414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 19:12:17.749727  156414 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:12:17.749794  156414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:12:17.760013  156414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 19:12:17.781070  156414 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:12:17.781135  156414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:12:17.794747  156414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 19:12:17.805862  156414 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:12:17.805938  156414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:12:17.815892  156414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 19:12:17.825005  156414 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:12:17.825072  156414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
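	The grep/rm sequence above checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that lacks it, so the following kubeadm phases can regenerate them. A small local sketch of that cleanup in Go, assuming direct file access rather than the ssh_runner used here:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		// Each kubeconfig must point at the expected control-plane endpoint;
		// anything missing or stale is removed so kubeadm can regenerate it.
		endpoint := "https://control-plane.minikube.internal:8443"
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil || !strings.Contains(string(data), endpoint) {
				_ = os.Remove(f) // ignore errors, like `rm -f`
				fmt.Println("removed stale config:", f)
			}
		}
	}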
	I0729 19:12:17.834745  156414 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:12:17.844191  156414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:12:17.963586  156414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:12:19.325254  156414 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.361626819s)
	I0729 19:12:19.325289  156414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:12:19.537565  156414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:12:19.611697  156414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
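	The commands above re-run the individual kubeadm init phases against the generated /var/tmp/minikube/kubeadm.yaml with a pinned PATH. A rough Go sketch of that sequence via os/exec, assuming it runs as root directly on the node instead of over SSH:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Re-run the individual init phases in the same order as the log:
		// certs, kubeconfig, kubelet-start, control-plane, etcd.
		phases := [][]string{
			{"init", "phase", "certs", "all"},
			{"init", "phase", "kubeconfig", "all"},
			{"init", "phase", "kubelet-start"},
			{"init", "phase", "control-plane", "all"},
			{"init", "phase", "etcd", "local"},
		}
		for _, p := range phases {
			args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
			cmd := exec.Command("kubeadm", args...)
			// Prepend the pinned binaries directory, mirroring the env PATH override above.
			cmd.Env = append(os.Environ(), "PATH=/var/lib/minikube/binaries/v1.30.3:"+os.Getenv("PATH"))
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				fmt.Fprintf(os.Stderr, "kubeadm %v failed: %v\n", p, err)
				os.Exit(1)
			}
		}
	}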
	I0729 19:12:19.710291  156414 api_server.go:52] waiting for apiserver process to appear ...
	I0729 19:12:19.710408  156414 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:12:20.210809  156414 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:12:20.711234  156414 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:12:20.727361  156414 api_server.go:72] duration metric: took 1.017067714s to wait for apiserver process to appear ...
	I0729 19:12:20.727396  156414 api_server.go:88] waiting for apiserver healthz status ...
	I0729 19:12:20.727432  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:20.727961  156414 api_server.go:269] stopped: https://192.168.50.95:8443/healthz: Get "https://192.168.50.95:8443/healthz": dial tcp 192.168.50.95:8443: connect: connection refused
	I0729 19:12:21.228408  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:23.622048  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 19:12:23.622093  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 19:12:23.622106  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:23.628174  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 19:12:23.628195  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 19:12:23.728403  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:23.732920  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:12:23.732969  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:12:24.227954  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:24.233109  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:12:24.233143  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:12:24.727641  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:24.735127  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:12:24.735156  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:12:25.227686  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:25.236914  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:12:25.236947  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:12:25.728500  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:25.735276  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:12:25.735307  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:12:26.227824  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:26.232439  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:12:26.232471  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:12:26.728521  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:26.732952  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:12:26.732989  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:12:27.227504  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:27.232166  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 200:
	ok
	I0729 19:12:27.238505  156414 api_server.go:141] control plane version: v1.30.3
	I0729 19:12:27.238531  156414 api_server.go:131] duration metric: took 6.511128129s to wait for apiserver health ...
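	The block above polls https://192.168.50.95:8443/healthz until it returns 200; the interim 403 (anonymous user before RBAC bootstrap) and 500 (post-start hooks still completing) responses are expected along the way. A standalone Go sketch of that polling loop, assuming the same endpoint and a self-signed serving certificate:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Endpoint taken from the log above. Verification is skipped only because
		// this throwaway probe talks to a self-signed apiserver serving cert.
		url := "https://192.168.50.95:8443/healthz"
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}

		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				// 403 (anonymous user) and 500 (post-start hooks still running)
				// are normal while the apiserver bootstraps; only 200 means healthy.
				if resp.StatusCode == http.StatusOK {
					fmt.Println("healthy:", string(body))
					return
				}
				fmt.Println("not ready yet:", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for /healthz")
	}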
	I0729 19:12:27.238541  156414 cni.go:84] Creating CNI manager for ""
	I0729 19:12:27.238560  156414 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:12:27.240943  156414 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 19:12:27.242526  156414 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 19:12:27.254578  156414 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
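	The scp above drops minikube's bridge CNI config into /etc/cni/net.d. The log only records the destination and size, so the conflist in the sketch below is an assumed, illustrative shape (subnet and plugin list included), not the literal 496-byte file:

	package main

	import "os"

	func main() {
		// Assumed minimal bridge conflist, similar in shape to what minikube writes.
		conflist := `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	`
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
			panic(err)
		}
	}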
	I0729 19:12:27.274700  156414 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 19:12:27.289359  156414 system_pods.go:59] 8 kube-system pods found
	I0729 19:12:27.289412  156414 system_pods.go:61] "coredns-7db6d8ff4d-dww2j" [31418aeb-98be-4b43-a687-5c8f64ec5e9d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:12:27.289424  156414 system_pods.go:61] "etcd-embed-certs-368536" [0a7aac00-9161-473a-87f1-2702211beaac] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 19:12:27.289443  156414 system_pods.go:61] "kube-apiserver-embed-certs-368536" [d6638a87-43df-457e-b335-1ab2aa03f421] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 19:12:27.289469  156414 system_pods.go:61] "kube-controller-manager-embed-certs-368536" [76fefc23-7794-40fc-845a-73d83eb55450] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 19:12:27.289478  156414 system_pods.go:61] "kube-proxy-9xwt2" [f637605b-9bc2-4922-b801-04f681a81e7c] Running
	I0729 19:12:27.289485  156414 system_pods.go:61] "kube-scheduler-embed-certs-368536" [476e1d42-6610-44b5-b77b-b25537f044eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 19:12:27.289495  156414 system_pods.go:61] "metrics-server-569cc877fc-xnkwq" [19328b03-8c7d-499f-9a30-31f023605e49] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:12:27.289500  156414 system_pods.go:61] "storage-provisioner" [b049d065-fbb1-48b6-a377-2938a0519c78] Running
	I0729 19:12:27.289513  156414 system_pods.go:74] duration metric: took 14.789212ms to wait for pod list to return data ...
	I0729 19:12:27.289525  156414 node_conditions.go:102] verifying NodePressure condition ...
	I0729 19:12:27.292692  156414 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 19:12:27.292716  156414 node_conditions.go:123] node cpu capacity is 2
	I0729 19:12:27.292809  156414 node_conditions.go:105] duration metric: took 3.278515ms to run NodePressure ...
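	The NodePressure verification above reads the node's capacity (ephemeral storage, CPU) and confirms no pressure conditions are set. A hedged client-go sketch of the same check, assuming a reachable kubeconfig at a hypothetical path:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Hypothetical kubeconfig path; the test builds this client from the profile under test.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name,
				n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
			for _, c := range n.Status.Conditions {
				// MemoryPressure, DiskPressure and PIDPressure must all be False for the check to pass.
				switch c.Type {
				case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
					if c.Status == corev1.ConditionTrue {
						fmt.Printf("  node under pressure: %s\n", c.Type)
					}
				}
			}
		}
	}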
	I0729 19:12:27.292825  156414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:12:27.563167  156414 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 19:12:27.567532  156414 kubeadm.go:739] kubelet initialised
	I0729 19:12:27.567552  156414 kubeadm.go:740] duration metric: took 4.363687ms waiting for restarted kubelet to initialise ...
	I0729 19:12:27.567566  156414 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:12:27.573549  156414 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-dww2j" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:29.580511  156414 pod_ready.go:102] pod "coredns-7db6d8ff4d-dww2j" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:32.080352  156414 pod_ready.go:102] pod "coredns-7db6d8ff4d-dww2j" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:34.079542  156414 pod_ready.go:92] pod "coredns-7db6d8ff4d-dww2j" in "kube-system" namespace has status "Ready":"True"
	I0729 19:12:34.079564  156414 pod_ready.go:81] duration metric: took 6.505994438s for pod "coredns-7db6d8ff4d-dww2j" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:34.079574  156414 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:36.086785  156414 pod_ready.go:102] pod "etcd-embed-certs-368536" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:37.089587  156414 pod_ready.go:92] pod "etcd-embed-certs-368536" in "kube-system" namespace has status "Ready":"True"
	I0729 19:12:37.089611  156414 pod_ready.go:81] duration metric: took 3.010030448s for pod "etcd-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.089621  156414 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.093814  156414 pod_ready.go:92] pod "kube-apiserver-embed-certs-368536" in "kube-system" namespace has status "Ready":"True"
	I0729 19:12:37.093833  156414 pod_ready.go:81] duration metric: took 4.206508ms for pod "kube-apiserver-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.093842  156414 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.097603  156414 pod_ready.go:92] pod "kube-controller-manager-embed-certs-368536" in "kube-system" namespace has status "Ready":"True"
	I0729 19:12:37.097621  156414 pod_ready.go:81] duration metric: took 3.772149ms for pod "kube-controller-manager-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.097633  156414 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9xwt2" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.101696  156414 pod_ready.go:92] pod "kube-proxy-9xwt2" in "kube-system" namespace has status "Ready":"True"
	I0729 19:12:37.101720  156414 pod_ready.go:81] duration metric: took 4.078653ms for pod "kube-proxy-9xwt2" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.101732  156414 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.107711  156414 pod_ready.go:92] pod "kube-scheduler-embed-certs-368536" in "kube-system" namespace has status "Ready":"True"
	I0729 19:12:37.107728  156414 pod_ready.go:81] duration metric: took 5.989066ms for pod "kube-scheduler-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.107738  156414 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:39.113796  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:41.115401  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:43.614461  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:45.614900  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:48.113738  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:50.114364  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:52.114790  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:54.613472  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:56.613889  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:59.114206  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:01.614516  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:04.114859  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:06.114993  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:08.615213  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:11.114015  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:13.114342  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:15.614423  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:18.114442  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:20.614465  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:23.117909  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:25.614155  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:28.114746  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:30.613689  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:32.614790  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:35.113716  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:37.116362  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:39.614545  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:42.114414  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:44.114654  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:46.114765  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:48.615523  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:51.114096  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:53.114180  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:55.614278  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:58.114138  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:00.613679  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:02.614878  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:05.117278  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:07.614025  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:09.614681  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:12.114414  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:14.114458  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:16.614364  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:19.114533  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:21.613756  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:24.114325  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:26.614276  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:29.114137  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:31.114274  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:33.115749  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:35.614067  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:37.614374  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:39.615618  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:42.114139  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:44.114503  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:46.114624  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:48.613926  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:50.614527  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:53.115129  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:55.613563  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:57.615164  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:59.616129  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:02.114384  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:04.114621  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:06.114864  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:08.115242  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:10.613949  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:13.115359  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:15.614560  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:17.615109  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:20.114341  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:22.115253  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:24.119792  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:26.614361  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:29.113806  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:31.114150  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:33.614207  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:35.616204  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:38.113264  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:40.615054  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:42.615127  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:45.115119  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:47.613589  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:49.613803  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:51.615235  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:54.113908  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:56.614614  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:59.114193  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:01.614642  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:04.114186  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:06.614156  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:08.614216  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:10.615368  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:13.116263  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:15.613987  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:17.614183  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:19.617124  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:22.114156  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:24.613643  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:26.613720  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:28.616174  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:31.114289  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:33.114818  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:35.614735  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:37.107998  156414 pod_ready.go:81] duration metric: took 4m0.000241864s for pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace to be "Ready" ...
	E0729 19:16:37.108045  156414 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 19:16:37.108068  156414 pod_ready.go:38] duration metric: took 4m9.540493845s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
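	The 4m0s wait that just timed out is a poll on the pod's Ready condition. A self-contained sketch of that style of wait with client-go, reusing the namespace and metrics-server pod name from the log and assuming a hypothetical local kubeconfig path:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod has condition Ready=True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Hypothetical kubeconfig path; the test derives it from the cluster under test.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		// Poll every 2s for up to 4m, matching the timeout seen in the log.
		err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "metrics-server-569cc877fc-xnkwq", metav1.GetOptions{})
				if err != nil {
					return false, nil // keep polling on transient errors
				}
				return podReady(pod), nil
			})
		fmt.Println("ready:", err == nil)
	}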
	I0729 19:16:37.108105  156414 kubeadm.go:597] duration metric: took 4m19.465427343s to restartPrimaryControlPlane
	W0729 19:16:37.108167  156414 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 19:16:37.108196  156414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
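	Once the restart path gives up, the cluster is torn down with kubeadm reset against the CRI-O socket before a clean re-init. A short Go sketch of that invocation, again assuming local root execution instead of the ssh_runner:

	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("kubeadm", "reset",
			"--cri-socket", "/var/run/crio/crio.sock", "--force")
		// Same PATH override as the restart phases, so the pinned kubeadm binary is used.
		cmd.Env = append(os.Environ(), "PATH=/var/lib/minikube/binaries/v1.30.3:"+os.Getenv("PATH"))
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			os.Exit(1)
		}
	}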
	
	
	==> CRI-O <==
	Jul 29 19:16:44 old-k8s-version-834964 crio[654]: time="2024-07-29 19:16:44.334310123Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722280604334271525,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cee3a30c-5a6f-4938-b64b-fa512bdc0cca name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:16:44 old-k8s-version-834964 crio[654]: time="2024-07-29 19:16:44.334926184Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=37499d40-dc3b-4d6e-bcf8-2ade0885cda2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:16:44 old-k8s-version-834964 crio[654]: time="2024-07-29 19:16:44.335002462Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=37499d40-dc3b-4d6e-bcf8-2ade0885cda2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:16:44 old-k8s-version-834964 crio[654]: time="2024-07-29 19:16:44.335034532Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=37499d40-dc3b-4d6e-bcf8-2ade0885cda2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:16:44 old-k8s-version-834964 crio[654]: time="2024-07-29 19:16:44.365414010Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b8f09d5b-f7af-4681-ab07-e07539751ffe name=/runtime.v1.RuntimeService/Version
	Jul 29 19:16:44 old-k8s-version-834964 crio[654]: time="2024-07-29 19:16:44.365558578Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b8f09d5b-f7af-4681-ab07-e07539751ffe name=/runtime.v1.RuntimeService/Version
	Jul 29 19:16:44 old-k8s-version-834964 crio[654]: time="2024-07-29 19:16:44.366617257Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3d22d5a3-3532-45e6-ad9a-60d2d0132fc8 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:16:44 old-k8s-version-834964 crio[654]: time="2024-07-29 19:16:44.367036132Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722280604367012779,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3d22d5a3-3532-45e6-ad9a-60d2d0132fc8 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:16:44 old-k8s-version-834964 crio[654]: time="2024-07-29 19:16:44.367628130Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1696a2b8-516d-4671-a9bc-8b531107c05f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:16:44 old-k8s-version-834964 crio[654]: time="2024-07-29 19:16:44.367708981Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1696a2b8-516d-4671-a9bc-8b531107c05f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:16:44 old-k8s-version-834964 crio[654]: time="2024-07-29 19:16:44.367753023Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=1696a2b8-516d-4671-a9bc-8b531107c05f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:16:44 old-k8s-version-834964 crio[654]: time="2024-07-29 19:16:44.402530047Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=528a5ea7-6fd1-4f88-8d3f-dbae7eb704a8 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:16:44 old-k8s-version-834964 crio[654]: time="2024-07-29 19:16:44.402601125Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=528a5ea7-6fd1-4f88-8d3f-dbae7eb704a8 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:16:44 old-k8s-version-834964 crio[654]: time="2024-07-29 19:16:44.404093059Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=827d6974-5d21-4654-9c79-cc5d056b35cb name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:16:44 old-k8s-version-834964 crio[654]: time="2024-07-29 19:16:44.404644412Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722280604404604028,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=827d6974-5d21-4654-9c79-cc5d056b35cb name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:16:44 old-k8s-version-834964 crio[654]: time="2024-07-29 19:16:44.405205856Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=900e9982-1c29-445a-853f-b5a064b6f8bc name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:16:44 old-k8s-version-834964 crio[654]: time="2024-07-29 19:16:44.405307946Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=900e9982-1c29-445a-853f-b5a064b6f8bc name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:16:44 old-k8s-version-834964 crio[654]: time="2024-07-29 19:16:44.405344863Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=900e9982-1c29-445a-853f-b5a064b6f8bc name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:16:44 old-k8s-version-834964 crio[654]: time="2024-07-29 19:16:44.437564041Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e8f9701c-f327-4582-8afa-f3059e3be067 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:16:44 old-k8s-version-834964 crio[654]: time="2024-07-29 19:16:44.437655635Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e8f9701c-f327-4582-8afa-f3059e3be067 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:16:44 old-k8s-version-834964 crio[654]: time="2024-07-29 19:16:44.438656268Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f6d628cc-8d81-4ec6-9244-4b0bd4a4545a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:16:44 old-k8s-version-834964 crio[654]: time="2024-07-29 19:16:44.439016484Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722280604438996445,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f6d628cc-8d81-4ec6-9244-4b0bd4a4545a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:16:44 old-k8s-version-834964 crio[654]: time="2024-07-29 19:16:44.439512572Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=142ce24f-1b7e-470f-8348-310fa22b8aba name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:16:44 old-k8s-version-834964 crio[654]: time="2024-07-29 19:16:44.439566744Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=142ce24f-1b7e-470f-8348-310fa22b8aba name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:16:44 old-k8s-version-834964 crio[654]: time="2024-07-29 19:16:44.439595570Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=142ce24f-1b7e-470f-8348-310fa22b8aba name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul29 18:59] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.057102] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044455] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.946781] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.486761] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.581640] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.337375] systemd-fstab-generator[572]: Ignoring "noauto" option for root device
	[  +0.060537] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068240] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.202307] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.149657] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.264127] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +5.997739] systemd-fstab-generator[842]: Ignoring "noauto" option for root device
	[  +0.070545] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.872502] systemd-fstab-generator[966]: Ignoring "noauto" option for root device
	[ +12.278643] kauditd_printk_skb: 46 callbacks suppressed
	[Jul29 19:03] systemd-fstab-generator[5025]: Ignoring "noauto" option for root device
	[Jul29 19:05] systemd-fstab-generator[5309]: Ignoring "noauto" option for root device
	[  +0.065530] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 19:16:44 up 17 min,  0 users,  load average: 0.15, 0.08, 0.06
	Linux old-k8s-version-834964 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 29 19:16:41 old-k8s-version-834964 kubelet[6480]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*fdPoller).wait(0xc000bbeb20, 0x0, 0x0, 0x0)
	Jul 29 19:16:41 old-k8s-version-834964 kubelet[6480]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify_poller.go:86 +0x91
	Jul 29 19:16:41 old-k8s-version-834964 kubelet[6480]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*Watcher).readEvents(0xc000578b40)
	Jul 29 19:16:41 old-k8s-version-834964 kubelet[6480]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:192 +0x206
	Jul 29 19:16:41 old-k8s-version-834964 kubelet[6480]: created by k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.NewWatcher
	Jul 29 19:16:41 old-k8s-version-834964 kubelet[6480]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:59 +0x1a8
	Jul 29 19:16:41 old-k8s-version-834964 kubelet[6480]: goroutine 149 [runnable]:
	Jul 29 19:16:41 old-k8s-version-834964 kubelet[6480]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc000578230, 0x1, 0x0, 0x0, 0x0, 0x0)
	Jul 29 19:16:41 old-k8s-version-834964 kubelet[6480]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Jul 29 19:16:41 old-k8s-version-834964 kubelet[6480]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc0001d0d20, 0x0, 0x0)
	Jul 29 19:16:41 old-k8s-version-834964 kubelet[6480]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Jul 29 19:16:41 old-k8s-version-834964 kubelet[6480]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc0001d4c40)
	Jul 29 19:16:41 old-k8s-version-834964 kubelet[6480]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Jul 29 19:16:41 old-k8s-version-834964 kubelet[6480]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Jul 29 19:16:41 old-k8s-version-834964 kubelet[6480]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Jul 29 19:16:41 old-k8s-version-834964 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 29 19:16:41 old-k8s-version-834964 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 29 19:16:42 old-k8s-version-834964 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Jul 29 19:16:42 old-k8s-version-834964 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 29 19:16:42 old-k8s-version-834964 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 29 19:16:42 old-k8s-version-834964 kubelet[6489]: I0729 19:16:42.275786    6489 server.go:416] Version: v1.20.0
	Jul 29 19:16:42 old-k8s-version-834964 kubelet[6489]: I0729 19:16:42.275998    6489 server.go:837] Client rotation is on, will bootstrap in background
	Jul 29 19:16:42 old-k8s-version-834964 kubelet[6489]: I0729 19:16:42.278104    6489 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 29 19:16:42 old-k8s-version-834964 kubelet[6489]: W0729 19:16:42.279056    6489 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jul 29 19:16:42 old-k8s-version-834964 kubelet[6489]: I0729 19:16:42.279174    6489 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-834964 -n old-k8s-version-834964
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-834964 -n old-k8s-version-834964: exit status 2 (217.64278ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-834964" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.43s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (428.66s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-612270 -n default-k8s-diff-port-612270
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-07-29 19:20:39.451756469 +0000 UTC m=+6449.422460238
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-612270 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-612270 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.064µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-612270 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-612270 -n default-k8s-diff-port-612270
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-612270 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-612270 logs -n 25: (1.200094165s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-524369 --memory=2200                     | no-preload-524369            | jenkins | v1.33.1 | 29 Jul 24 18:54 UTC | 29 Jul 24 19:05 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-612270       | default-k8s-diff-port-612270 | jenkins | v1.33.1 | 29 Jul 24 18:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-612270 | jenkins | v1.33.1 | 29 Jul 24 18:55 UTC | 29 Jul 24 19:04 UTC |
	|         | default-k8s-diff-port-612270                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-834964                              | old-k8s-version-834964       | jenkins | v1.33.1 | 29 Jul 24 18:55 UTC | 29 Jul 24 18:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-834964             | old-k8s-version-834964       | jenkins | v1.33.1 | 29 Jul 24 18:55 UTC | 29 Jul 24 18:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-834964                              | old-k8s-version-834964       | jenkins | v1.33.1 | 29 Jul 24 18:55 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-974855                              | cert-expiration-974855       | jenkins | v1.33.1 | 29 Jul 24 19:01 UTC | 29 Jul 24 19:01 UTC |
	| start   | -p newest-cni-453780 --memory=2200 --alsologtostderr   | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:01 UTC | 29 Jul 24 19:02 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-453780             | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-453780                                   | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-453780                  | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-453780 --memory=2200 --alsologtostderr   | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| image   | newest-cni-453780 image list                           | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-453780                                   | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-453780                                   | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-453780                                   | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	| delete  | -p newest-cni-453780                                   | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	| delete  | -p                                                     | disable-driver-mounts-148539 | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | disable-driver-mounts-148539                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-368536                                  | embed-certs-368536           | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:04 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-368536            | embed-certs-368536           | jenkins | v1.33.1 | 29 Jul 24 19:04 UTC | 29 Jul 24 19:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-368536                                  | embed-certs-368536           | jenkins | v1.33.1 | 29 Jul 24 19:04 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-368536                 | embed-certs-368536           | jenkins | v1.33.1 | 29 Jul 24 19:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-368536                                  | embed-certs-368536           | jenkins | v1.33.1 | 29 Jul 24 19:07 UTC | 29 Jul 24 19:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-834964                              | old-k8s-version-834964       | jenkins | v1.33.1 | 29 Jul 24 19:19 UTC | 29 Jul 24 19:19 UTC |
	| delete  | -p no-preload-524369                                   | no-preload-524369            | jenkins | v1.33.1 | 29 Jul 24 19:20 UTC | 29 Jul 24 19:20 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 19:07:08
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 19:07:08.883432  156414 out.go:291] Setting OutFile to fd 1 ...
	I0729 19:07:08.883769  156414 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:07:08.883781  156414 out.go:304] Setting ErrFile to fd 2...
	I0729 19:07:08.883788  156414 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:07:08.883976  156414 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19339-88081/.minikube/bin
	I0729 19:07:08.884534  156414 out.go:298] Setting JSON to false
	I0729 19:07:08.885578  156414 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":13749,"bootTime":1722266280,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 19:07:08.885642  156414 start.go:139] virtualization: kvm guest
	I0729 19:07:08.888023  156414 out.go:177] * [embed-certs-368536] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 19:07:08.889601  156414 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 19:07:08.889601  156414 notify.go:220] Checking for updates...
	I0729 19:07:08.892436  156414 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 19:07:08.893966  156414 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19339-88081/kubeconfig
	I0729 19:07:08.895257  156414 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19339-88081/.minikube
	I0729 19:07:08.896516  156414 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 19:07:08.897746  156414 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 19:07:08.899225  156414 config.go:182] Loaded profile config "embed-certs-368536": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:07:08.899588  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:07:08.899642  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:07:08.914943  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40387
	I0729 19:07:08.915307  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:07:08.915885  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:07:08.915905  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:07:08.916305  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:07:08.916486  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:07:08.916703  156414 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 19:07:08.917034  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:07:08.917074  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:07:08.931497  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36817
	I0729 19:07:08.931857  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:07:08.932280  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:07:08.932300  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:07:08.932640  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:07:08.932819  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:07:08.968386  156414 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 19:07:08.969538  156414 start.go:297] selected driver: kvm2
	I0729 19:07:08.969556  156414 start.go:901] validating driver "kvm2" against &{Name:embed-certs-368536 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-368536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.95 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:07:08.969681  156414 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 19:07:08.970358  156414 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 19:07:08.970428  156414 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19339-88081/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 19:07:08.986808  156414 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 19:07:08.987271  156414 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 19:07:08.987351  156414 cni.go:84] Creating CNI manager for ""
	I0729 19:07:08.987370  156414 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:07:08.987443  156414 start.go:340] cluster config:
	{Name:embed-certs-368536 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-368536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.95 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:07:08.987605  156414 iso.go:125] acquiring lock: {Name:mkff602c753bfa3e0d79d57ee8dc490f2e8d0298 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 19:07:08.989202  156414 out.go:177] * Starting "embed-certs-368536" primary control-plane node in "embed-certs-368536" cluster
	I0729 19:07:08.990452  156414 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 19:07:08.990496  156414 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 19:07:08.990510  156414 cache.go:56] Caching tarball of preloaded images
	I0729 19:07:08.990604  156414 preload.go:172] Found /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 19:07:08.990618  156414 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 19:07:08.990746  156414 profile.go:143] Saving config to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/config.json ...
	I0729 19:07:08.990949  156414 start.go:360] acquireMachinesLock for embed-certs-368536: {Name:mkb4fc91615189a18c2505c715f6575ee0da2912 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 19:07:08.990994  156414 start.go:364] duration metric: took 26.792µs to acquireMachinesLock for "embed-certs-368536"
	I0729 19:07:08.991013  156414 start.go:96] Skipping create...Using existing machine configuration
	I0729 19:07:08.991023  156414 fix.go:54] fixHost starting: 
	I0729 19:07:08.991314  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:07:08.991356  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:07:09.006100  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45265
	I0729 19:07:09.006507  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:07:09.007034  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:07:09.007060  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:07:09.007401  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:07:09.007594  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:07:09.007758  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetState
	I0729 19:07:09.009424  156414 fix.go:112] recreateIfNeeded on embed-certs-368536: state=Running err=<nil>
	W0729 19:07:09.009448  156414 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 19:07:09.011240  156414 out.go:177] * Updating the running kvm2 "embed-certs-368536" VM ...
	I0729 19:07:09.012305  156414 machine.go:94] provisionDockerMachine start ...
	I0729 19:07:09.012324  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:07:09.012506  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:07:09.014924  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:07:09.015334  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:07:09.015362  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:07:09.015507  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:07:09.015664  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:07:09.015796  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:07:09.015962  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:07:09.016106  156414 main.go:141] libmachine: Using SSH client type: native
	I0729 19:07:09.016288  156414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.95 22 <nil> <nil>}
	I0729 19:07:09.016300  156414 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 19:07:11.913157  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:14.985074  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:21.065092  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:24.137179  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:30.217186  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:33.289154  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:41.417004  152077 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:07:41.417303  152077 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:07:41.417326  152077 kubeadm.go:310] 
	I0729 19:07:41.417370  152077 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 19:07:41.417434  152077 kubeadm.go:310] 		timed out waiting for the condition
	I0729 19:07:41.417456  152077 kubeadm.go:310] 
	I0729 19:07:41.417514  152077 kubeadm.go:310] 	This error is likely caused by:
	I0729 19:07:41.417586  152077 kubeadm.go:310] 		- The kubelet is not running
	I0729 19:07:41.417720  152077 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 19:07:41.417732  152077 kubeadm.go:310] 
	I0729 19:07:41.417870  152077 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 19:07:41.417917  152077 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 19:07:41.417972  152077 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 19:07:41.417982  152077 kubeadm.go:310] 
	I0729 19:07:41.418104  152077 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 19:07:41.418232  152077 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 19:07:41.418247  152077 kubeadm.go:310] 
	I0729 19:07:41.418370  152077 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 19:07:41.418477  152077 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 19:07:41.418596  152077 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 19:07:41.418696  152077 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 19:07:41.418706  152077 kubeadm.go:310] 
	I0729 19:07:41.419442  152077 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 19:07:41.419562  152077 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 19:07:41.419660  152077 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 19:07:41.419767  152077 kubeadm.go:394] duration metric: took 8m3.273724985s to StartCluster
	I0729 19:07:41.419853  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:07:41.419923  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:07:41.466895  152077 cri.go:89] found id: ""
	I0729 19:07:41.466922  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.466933  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:07:41.466941  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:07:41.467013  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:07:41.500836  152077 cri.go:89] found id: ""
	I0729 19:07:41.500876  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.500888  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:07:41.500896  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:07:41.500949  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:07:41.534917  152077 cri.go:89] found id: ""
	I0729 19:07:41.534946  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.534958  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:07:41.534968  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:07:41.535038  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:07:41.570516  152077 cri.go:89] found id: ""
	I0729 19:07:41.570545  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.570556  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:07:41.570565  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:07:41.570640  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:07:41.608833  152077 cri.go:89] found id: ""
	I0729 19:07:41.608881  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.608894  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:07:41.608902  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:07:41.608969  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:07:41.644079  152077 cri.go:89] found id: ""
	I0729 19:07:41.644114  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.644127  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:07:41.644136  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:07:41.644198  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:07:41.678991  152077 cri.go:89] found id: ""
	I0729 19:07:41.679019  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.679028  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:07:41.679035  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:07:41.679088  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:07:41.722775  152077 cri.go:89] found id: ""
	I0729 19:07:41.722803  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.722815  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:07:41.722829  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:07:41.722857  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:07:41.841614  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:07:41.841641  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:07:41.841658  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:07:41.945679  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:07:41.945716  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:07:41.984505  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:07:41.984536  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:07:42.037376  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:07:42.037418  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0729 19:07:42.051259  152077 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0729 19:07:42.051310  152077 out.go:239] * 
	W0729 19:07:42.051369  152077 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 19:07:42.051390  152077 out.go:239] * 
	W0729 19:07:42.052280  152077 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 19:07:42.055255  152077 out.go:177] 
	W0729 19:07:42.056299  152077 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 19:07:42.056362  152077 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0729 19:07:42.056391  152077 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0729 19:07:42.057745  152077 out.go:177] 
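	The failure above ends with minikube's own hint (kubelet journal plus the cgroup-driver flag). A minimal follow-up sketch using only commands already named in this log; PROFILE is a placeholder because the profile name for PID 152077 is not visible in this excerpt, and the systemd cgroup-driver value assumes the suggestion line applies to this host:

	# on the node (e.g. via `minikube ssh`): inspect the kubelet as suggested
	systemctl status kubelet
	journalctl -xeu kubelet | tail -n 100

	# list control-plane containers under cri-o, as the kubeadm output suggests
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

	# retry the start with the suggested kubelet flag (PROFILE is a placeholder)
	minikube start -p PROFILE --extra-config=kubelet.cgroup-driver=systemd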
	I0729 19:07:42.413268  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:45.481071  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:51.561079  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:54.633091  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:00.713100  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:03.785182  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:09.865116  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:12.937102  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:19.017094  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:22.093191  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:28.169116  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:31.241130  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:37.321134  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:40.393092  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:46.473118  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:49.545166  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:55.625086  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:58.697184  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:04.777113  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:07.849165  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:13.933065  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:17.001100  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:23.081133  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:26.153086  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:32.233178  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:35.305183  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:41.385184  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:44.461106  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:50.537120  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:53.609150  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:59.689091  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:02.761193  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:08.845074  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:11.917067  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:17.993090  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:21.065137  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:27.145098  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:30.217175  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:36.301060  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:39.369118  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:45.449082  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:48.521097  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:54.601120  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:57.673200  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:03.753116  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:06.825136  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:12.905195  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:15.977118  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:22.057076  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:25.129144  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:31.209150  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:34.281164  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:40.365067  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:43.437113  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:46.437452  156414 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 19:11:46.437532  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetMachineName
	I0729 19:11:46.437865  156414 buildroot.go:166] provisioning hostname "embed-certs-368536"
	I0729 19:11:46.437902  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetMachineName
	I0729 19:11:46.438117  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:11:46.439690  156414 machine.go:97] duration metric: took 4m37.427371174s to provisionDockerMachine
	I0729 19:11:46.439733  156414 fix.go:56] duration metric: took 4m37.448711854s for fixHost
	I0729 19:11:46.439746  156414 start.go:83] releasing machines lock for "embed-certs-368536", held for 4m37.448735558s
	W0729 19:11:46.439780  156414 start.go:714] error starting host: provision: host is not running
	W0729 19:11:46.439926  156414 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0729 19:11:46.439941  156414 start.go:729] Will try again in 5 seconds ...
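	When the kvm2 driver keeps reporting "no route to host" on port 22 and then gives up with "provision: host is not running", the libvirt domain is usually stopped or has not obtained a lease yet. A hedged diagnostic sketch, assuming the default qemu:///system connection used by the kvm2 driver; the domain name and IP are taken from the log above:

	# confirm the libvirt domain state
	virsh -c qemu:///system list --all | grep embed-certs-368536

	# check whether the domain currently holds a DHCP lease / IP address
	virsh -c qemu:///system domifaddr embed-certs-368536

	# basic reachability check against the address the driver is dialing
	ping -c 1 192.168.50.95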
	I0729 19:11:51.440155  156414 start.go:360] acquireMachinesLock for embed-certs-368536: {Name:mkb4fc91615189a18c2505c715f6575ee0da2912 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 19:11:51.440266  156414 start.go:364] duration metric: took 69.381µs to acquireMachinesLock for "embed-certs-368536"
	I0729 19:11:51.440311  156414 start.go:96] Skipping create...Using existing machine configuration
	I0729 19:11:51.440322  156414 fix.go:54] fixHost starting: 
	I0729 19:11:51.440735  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:11:51.440764  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:11:51.455959  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35793
	I0729 19:11:51.456443  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:11:51.457053  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:11:51.457078  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:11:51.457475  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:11:51.457721  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:11:51.457885  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetState
	I0729 19:11:51.459531  156414 fix.go:112] recreateIfNeeded on embed-certs-368536: state=Stopped err=<nil>
	I0729 19:11:51.459558  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	W0729 19:11:51.459736  156414 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 19:11:51.461603  156414 out.go:177] * Restarting existing kvm2 VM for "embed-certs-368536" ...
	I0729 19:11:51.462768  156414 main.go:141] libmachine: (embed-certs-368536) Calling .Start
	I0729 19:11:51.462973  156414 main.go:141] libmachine: (embed-certs-368536) Ensuring networks are active...
	I0729 19:11:51.463679  156414 main.go:141] libmachine: (embed-certs-368536) Ensuring network default is active
	I0729 19:11:51.464155  156414 main.go:141] libmachine: (embed-certs-368536) Ensuring network mk-embed-certs-368536 is active
	I0729 19:11:51.464571  156414 main.go:141] libmachine: (embed-certs-368536) Getting domain xml...
	I0729 19:11:51.465359  156414 main.go:141] libmachine: (embed-certs-368536) Creating domain...
	I0729 19:11:51.794918  156414 main.go:141] libmachine: (embed-certs-368536) Waiting to get IP...
	I0729 19:11:51.795679  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:51.796159  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:51.796221  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:51.796150  157493 retry.go:31] will retry after 235.729444ms: waiting for machine to come up
	I0729 19:11:52.033554  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:52.034180  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:52.034207  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:52.034103  157493 retry.go:31] will retry after 323.595446ms: waiting for machine to come up
	I0729 19:11:52.359640  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:52.360204  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:52.360233  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:52.360143  157493 retry.go:31] will retry after 341.954873ms: waiting for machine to come up
	I0729 19:11:52.703779  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:52.704350  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:52.704373  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:52.704298  157493 retry.go:31] will retry after 385.738264ms: waiting for machine to come up
	I0729 19:11:53.091976  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:53.092451  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:53.092484  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:53.092397  157493 retry.go:31] will retry after 759.811264ms: waiting for machine to come up
	I0729 19:11:53.853241  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:53.853799  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:53.853826  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:53.853755  157493 retry.go:31] will retry after 761.294244ms: waiting for machine to come up
	I0729 19:11:54.616298  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:54.617009  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:54.617039  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:54.616952  157493 retry.go:31] will retry after 1.056936741s: waiting for machine to come up
	I0729 19:11:55.675047  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:55.675491  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:55.675514  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:55.675442  157493 retry.go:31] will retry after 1.232760679s: waiting for machine to come up
	I0729 19:11:56.909745  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:56.910283  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:56.910309  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:56.910228  157493 retry.go:31] will retry after 1.432617964s: waiting for machine to come up
	I0729 19:11:58.344399  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:58.345006  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:58.345033  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:58.344957  157493 retry.go:31] will retry after 1.914060621s: waiting for machine to come up
	I0729 19:12:00.262146  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:00.262707  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:12:00.262739  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:12:00.262638  157493 retry.go:31] will retry after 2.77447957s: waiting for machine to come up
	I0729 19:12:03.039059  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:03.039693  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:12:03.039718  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:12:03.039650  157493 retry.go:31] will retry after 2.755354142s: waiting for machine to come up
	I0729 19:12:05.797890  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:05.798279  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:12:05.798306  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:12:05.798234  157493 retry.go:31] will retry after 4.501451096s: waiting for machine to come up
	I0729 19:12:10.304116  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.304586  156414 main.go:141] libmachine: (embed-certs-368536) Found IP for machine: 192.168.50.95
	I0729 19:12:10.304616  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has current primary IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.304626  156414 main.go:141] libmachine: (embed-certs-368536) Reserving static IP address...
	I0729 19:12:10.305096  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "embed-certs-368536", mac: "52:54:00:86:e7:e8", ip: "192.168.50.95"} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.305134  156414 main.go:141] libmachine: (embed-certs-368536) DBG | skip adding static IP to network mk-embed-certs-368536 - found existing host DHCP lease matching {name: "embed-certs-368536", mac: "52:54:00:86:e7:e8", ip: "192.168.50.95"}
	I0729 19:12:10.305151  156414 main.go:141] libmachine: (embed-certs-368536) Reserved static IP address: 192.168.50.95
	I0729 19:12:10.305166  156414 main.go:141] libmachine: (embed-certs-368536) Waiting for SSH to be available...
	I0729 19:12:10.305184  156414 main.go:141] libmachine: (embed-certs-368536) DBG | Getting to WaitForSSH function...
	I0729 19:12:10.307568  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.307936  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.307972  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.308079  156414 main.go:141] libmachine: (embed-certs-368536) DBG | Using SSH client type: external
	I0729 19:12:10.308110  156414 main.go:141] libmachine: (embed-certs-368536) DBG | Using SSH private key: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/embed-certs-368536/id_rsa (-rw-------)
	I0729 19:12:10.308140  156414 main.go:141] libmachine: (embed-certs-368536) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.95 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19339-88081/.minikube/machines/embed-certs-368536/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 19:12:10.308157  156414 main.go:141] libmachine: (embed-certs-368536) DBG | About to run SSH command:
	I0729 19:12:10.308170  156414 main.go:141] libmachine: (embed-certs-368536) DBG | exit 0
	I0729 19:12:10.436958  156414 main.go:141] libmachine: (embed-certs-368536) DBG | SSH cmd err, output: <nil>: 
	I0729 19:12:10.437313  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetConfigRaw
	I0729 19:12:10.437962  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetIP
	I0729 19:12:10.440164  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.440520  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.440545  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.440930  156414 profile.go:143] Saving config to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/config.json ...
	I0729 19:12:10.441184  156414 machine.go:94] provisionDockerMachine start ...
	I0729 19:12:10.441207  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:12:10.441430  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:10.443802  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.444155  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.444187  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.444375  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:10.444541  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:10.444719  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:10.444897  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:10.445064  156414 main.go:141] libmachine: Using SSH client type: native
	I0729 19:12:10.445289  156414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.95 22 <nil> <nil>}
	I0729 19:12:10.445300  156414 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 19:12:10.557371  156414 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 19:12:10.557405  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetMachineName
	I0729 19:12:10.557675  156414 buildroot.go:166] provisioning hostname "embed-certs-368536"
	I0729 19:12:10.557706  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetMachineName
	I0729 19:12:10.557905  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:10.560444  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.560793  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.560819  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.561027  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:10.561249  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:10.561442  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:10.561594  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:10.561783  156414 main.go:141] libmachine: Using SSH client type: native
	I0729 19:12:10.561990  156414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.95 22 <nil> <nil>}
	I0729 19:12:10.562006  156414 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-368536 && echo "embed-certs-368536" | sudo tee /etc/hostname
	I0729 19:12:10.688844  156414 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-368536
	
	I0729 19:12:10.688891  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:10.691701  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.692075  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.692102  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.692293  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:10.692468  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:10.692660  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:10.692821  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:10.693024  156414 main.go:141] libmachine: Using SSH client type: native
	I0729 19:12:10.693222  156414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.95 22 <nil> <nil>}
	I0729 19:12:10.693245  156414 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-368536' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-368536/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-368536' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 19:12:10.814346  156414 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 19:12:10.814376  156414 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19339-88081/.minikube CaCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19339-88081/.minikube}
	I0729 19:12:10.814400  156414 buildroot.go:174] setting up certificates
	I0729 19:12:10.814409  156414 provision.go:84] configureAuth start
	I0729 19:12:10.814417  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetMachineName
	I0729 19:12:10.814706  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetIP
	I0729 19:12:10.817503  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.817859  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.817886  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.818025  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:10.820077  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.820443  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.820473  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.820617  156414 provision.go:143] copyHostCerts
	I0729 19:12:10.820703  156414 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem, removing ...
	I0729 19:12:10.820713  156414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem
	I0729 19:12:10.820781  156414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem (1078 bytes)
	I0729 19:12:10.820905  156414 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem, removing ...
	I0729 19:12:10.820914  156414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem
	I0729 19:12:10.820943  156414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem (1123 bytes)
	I0729 19:12:10.821005  156414 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem, removing ...
	I0729 19:12:10.821013  156414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem
	I0729 19:12:10.821041  156414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem (1679 bytes)
	I0729 19:12:10.821115  156414 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem org=jenkins.embed-certs-368536 san=[127.0.0.1 192.168.50.95 embed-certs-368536 localhost minikube]
	I0729 19:12:10.867595  156414 provision.go:177] copyRemoteCerts
	I0729 19:12:10.867661  156414 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 19:12:10.867685  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:10.870501  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.870857  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.870881  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.871063  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:10.871267  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:10.871428  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:10.871595  156414 sshutil.go:53] new ssh client: &{IP:192.168.50.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/embed-certs-368536/id_rsa Username:docker}
	I0729 19:12:10.956269  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 19:12:10.981554  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 19:12:11.006055  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 19:12:11.031709  156414 provision.go:87] duration metric: took 217.286178ms to configureAuth
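	The certificates copied above can be checked on the guest after configureAuth finishes. A small verification sketch, assuming openssl is available in the guest image; the paths are the remote targets shown in the scp lines:

	# confirm the server certificate landed and carries the expected SANs
	sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 "Subject Alternative Name"

	# confirm the CA certificate and server key are present
	sudo ls -l /etc/docker/ca.pem /etc/docker/server-key.pem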
	I0729 19:12:11.031746  156414 buildroot.go:189] setting minikube options for container-runtime
	I0729 19:12:11.031924  156414 config.go:182] Loaded profile config "embed-certs-368536": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:12:11.032088  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:11.034772  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.035129  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:11.035159  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.035405  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:11.035583  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:11.035737  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:11.035863  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:11.036016  156414 main.go:141] libmachine: Using SSH client type: native
	I0729 19:12:11.036203  156414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.95 22 <nil> <nil>}
	I0729 19:12:11.036218  156414 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 19:12:11.321782  156414 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 19:12:11.321810  156414 machine.go:97] duration metric: took 880.608535ms to provisionDockerMachine
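	Provisioning above writes /etc/sysconfig/crio.minikube with the insecure-registry option and restarts cri-o. A quick way to confirm the result on the guest (a sketch; it assumes the standard cri-o systemd unit and socket path shown elsewhere in this log):

	# the option file written by the SSH command above
	cat /etc/sysconfig/crio.minikube

	# cri-o should be active again after the restart
	systemctl is-active crio

	# the runtime should answer over its socket
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock info >/dev/null && echo crio OK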
	I0729 19:12:11.321823  156414 start.go:293] postStartSetup for "embed-certs-368536" (driver="kvm2")
	I0729 19:12:11.321834  156414 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 19:12:11.321850  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:12:11.322243  156414 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 19:12:11.322281  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:11.325081  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.325437  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:11.325459  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.325612  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:11.325841  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:11.326025  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:11.326177  156414 sshutil.go:53] new ssh client: &{IP:192.168.50.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/embed-certs-368536/id_rsa Username:docker}
	I0729 19:12:11.412548  156414 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 19:12:11.417360  156414 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 19:12:11.417386  156414 filesync.go:126] Scanning /home/jenkins/minikube-integration/19339-88081/.minikube/addons for local assets ...
	I0729 19:12:11.417466  156414 filesync.go:126] Scanning /home/jenkins/minikube-integration/19339-88081/.minikube/files for local assets ...
	I0729 19:12:11.417538  156414 filesync.go:149] local asset: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem -> 952822.pem in /etc/ssl/certs
	I0729 19:12:11.417632  156414 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 19:12:11.429176  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem --> /etc/ssl/certs/952822.pem (1708 bytes)
	I0729 19:12:11.457333  156414 start.go:296] duration metric: took 135.490005ms for postStartSetup
	I0729 19:12:11.457401  156414 fix.go:56] duration metric: took 20.017075153s for fixHost
	I0729 19:12:11.457428  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:11.460243  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.460653  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:11.460702  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.460873  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:11.461060  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:11.461233  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:11.461408  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:11.461586  156414 main.go:141] libmachine: Using SSH client type: native
	I0729 19:12:11.461763  156414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.95 22 <nil> <nil>}
	I0729 19:12:11.461779  156414 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 19:12:11.573697  156414 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722280331.531987438
	
	I0729 19:12:11.573724  156414 fix.go:216] guest clock: 1722280331.531987438
	I0729 19:12:11.573735  156414 fix.go:229] Guest: 2024-07-29 19:12:11.531987438 +0000 UTC Remote: 2024-07-29 19:12:11.457406225 +0000 UTC m=+302.608153452 (delta=74.581213ms)
	I0729 19:12:11.573758  156414 fix.go:200] guest clock delta is within tolerance: 74.581213ms
	I0729 19:12:11.573763  156414 start.go:83] releasing machines lock for "embed-certs-368536", held for 20.133485011s
	I0729 19:12:11.573782  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:12:11.574056  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetIP
	I0729 19:12:11.576405  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.576760  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:11.576798  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.576988  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:12:11.577479  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:12:11.577644  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:12:11.577737  156414 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 19:12:11.577799  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:11.577840  156414 ssh_runner.go:195] Run: cat /version.json
	I0729 19:12:11.577869  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:11.580473  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.580564  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.580840  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:11.580890  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.580921  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:11.580936  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.581056  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:11.581203  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:11.581358  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:11.581369  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:11.581554  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:11.581564  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:11.581669  156414 sshutil.go:53] new ssh client: &{IP:192.168.50.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/embed-certs-368536/id_rsa Username:docker}
	I0729 19:12:11.581732  156414 sshutil.go:53] new ssh client: &{IP:192.168.50.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/embed-certs-368536/id_rsa Username:docker}
	I0729 19:12:11.662254  156414 ssh_runner.go:195] Run: systemctl --version
	I0729 19:12:11.688214  156414 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 19:12:11.838322  156414 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 19:12:11.844435  156414 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 19:12:11.844521  156414 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 19:12:11.859899  156414 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 19:12:11.859923  156414 start.go:495] detecting cgroup driver to use...
	I0729 19:12:11.859990  156414 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 19:12:11.876768  156414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 19:12:11.890508  156414 docker.go:217] disabling cri-docker service (if available) ...
	I0729 19:12:11.890584  156414 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 19:12:11.904817  156414 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 19:12:11.919251  156414 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 19:12:12.053205  156414 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 19:12:12.205975  156414 docker.go:233] disabling docker service ...
	I0729 19:12:12.206041  156414 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 19:12:12.222129  156414 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 19:12:12.235288  156414 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 19:12:12.386940  156414 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 19:12:12.503688  156414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 19:12:12.518539  156414 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 19:12:12.538744  156414 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 19:12:12.538805  156414 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:12:12.549615  156414 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 19:12:12.549683  156414 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:12:12.560565  156414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:12:12.571814  156414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:12:12.582852  156414 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 19:12:12.595237  156414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:12:12.607232  156414 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:12:12.626183  156414 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:12:12.637202  156414 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 19:12:12.647141  156414 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 19:12:12.647204  156414 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 19:12:12.661099  156414 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
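
The crio.go message above is deliberately tolerant: the sysctl probe for net.bridge.bridge-nf-call-iptables fails while br_netfilter is not loaded, so the code falls back to `modprobe br_netfilter` and then enables IPv4 forwarding. A rough sketch of that fallback sequence with os/exec (illustrative only; it shells out to the same commands the log shows rather than reproducing minikube's internal helpers):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The sysctl key only exists once br_netfilter is loaded, so a failure here
	// is expected on a fresh guest and simply triggers the modprobe fallback.
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			fmt.Println("could not load br_netfilter:", err)
		}
	}
	// Kubernetes pod networking needs IPv4 forwarding either way.
	if err := exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run(); err != nil {
		fmt.Println("could not enable ip_forward:", err)
	}
}
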
	I0729 19:12:12.671936  156414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:12:12.795418  156414 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 19:12:12.934125  156414 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 19:12:12.934220  156414 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 19:12:12.939058  156414 start.go:563] Will wait 60s for crictl version
	I0729 19:12:12.939123  156414 ssh_runner.go:195] Run: which crictl
	I0729 19:12:12.943972  156414 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 19:12:12.990564  156414 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 19:12:12.990672  156414 ssh_runner.go:195] Run: crio --version
	I0729 19:12:13.018852  156414 ssh_runner.go:195] Run: crio --version
	I0729 19:12:13.053593  156414 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 19:12:13.054890  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetIP
	I0729 19:12:13.057601  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:13.057994  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:13.058025  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:13.058229  156414 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0729 19:12:13.062303  156414 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 19:12:13.075815  156414 kubeadm.go:883] updating cluster {Name:embed-certs-368536 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-368536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.95 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 19:12:13.075989  156414 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 19:12:13.076042  156414 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:12:13.112073  156414 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 19:12:13.112136  156414 ssh_runner.go:195] Run: which lz4
	I0729 19:12:13.116209  156414 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 19:12:13.120509  156414 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 19:12:13.120545  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 19:12:14.521429  156414 crio.go:462] duration metric: took 1.405252765s to copy over tarball
	I0729 19:12:14.521516  156414 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 19:12:16.740086  156414 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.218529852s)
	I0729 19:12:16.740129  156414 crio.go:469] duration metric: took 2.218665992s to extract the tarball
	I0729 19:12:16.740140  156414 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 19:12:16.779302  156414 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:12:16.825787  156414 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 19:12:16.825823  156414 cache_images.go:84] Images are preloaded, skipping loading
	I0729 19:12:16.825832  156414 kubeadm.go:934] updating node { 192.168.50.95 8443 v1.30.3 crio true true} ...
	I0729 19:12:16.825972  156414 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-368536 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.95
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-368536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 19:12:16.826034  156414 ssh_runner.go:195] Run: crio config
	I0729 19:12:16.873701  156414 cni.go:84] Creating CNI manager for ""
	I0729 19:12:16.873734  156414 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:12:16.873752  156414 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 19:12:16.873776  156414 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.95 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-368536 NodeName:embed-certs-368536 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.95"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.95 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 19:12:16.873923  156414 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.95
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-368536"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.95
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.95"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 19:12:16.873987  156414 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 19:12:16.884427  156414 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 19:12:16.884530  156414 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 19:12:16.895097  156414 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0729 19:12:16.914112  156414 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 19:12:16.931797  156414 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0729 19:12:16.949824  156414 ssh_runner.go:195] Run: grep 192.168.50.95	control-plane.minikube.internal$ /etc/hosts
	I0729 19:12:16.953765  156414 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.95	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
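
Both host.minikube.internal and control-plane.minikube.internal are pinned with the same shell idiom: strip any existing tab-separated entry for the name from /etc/hosts, append a fresh `IP<TAB>name` line, and copy the result back. The same idempotent rewrite in plain Go, operating on the file contents as a string (a sketch with hypothetical sample data, not the command minikube runs):

package main

import (
	"fmt"
	"strings"
)

// pinHost returns the hosts content with any existing entry for name replaced
// by a single "ip<TAB>name" line at the end.
func pinHost(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if line == "" || strings.HasSuffix(line, "\t"+name) {
			continue // drop blanks and any stale entry for this name
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.50.1\thost.minikube.internal\n"
	fmt.Print(pinHost(hosts, "192.168.50.95", "control-plane.minikube.internal"))
}
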
	I0729 19:12:16.967138  156414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:12:17.088789  156414 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 19:12:17.108577  156414 certs.go:68] Setting up /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536 for IP: 192.168.50.95
	I0729 19:12:17.108601  156414 certs.go:194] generating shared ca certs ...
	I0729 19:12:17.108623  156414 certs.go:226] acquiring lock for ca certs: {Name:mk20106d58b3f22ea0fc0f4b499fa3fa572a2690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:12:17.108831  156414 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key
	I0729 19:12:17.109076  156414 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key
	I0729 19:12:17.109118  156414 certs.go:256] generating profile certs ...
	I0729 19:12:17.109296  156414 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/client.key
	I0729 19:12:17.109394  156414 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/apiserver.key.77ca8755
	I0729 19:12:17.109448  156414 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/proxy-client.key
	I0729 19:12:17.109619  156414 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282.pem (1338 bytes)
	W0729 19:12:17.109651  156414 certs.go:480] ignoring /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282_empty.pem, impossibly tiny 0 bytes
	I0729 19:12:17.109661  156414 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 19:12:17.109688  156414 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem (1078 bytes)
	I0729 19:12:17.109722  156414 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem (1123 bytes)
	I0729 19:12:17.109756  156414 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem (1679 bytes)
	I0729 19:12:17.109819  156414 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem (1708 bytes)
	I0729 19:12:17.110926  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 19:12:17.142003  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 19:12:17.176798  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 19:12:17.204272  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 19:12:17.244558  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0729 19:12:17.281935  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 19:12:17.306456  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 19:12:17.332576  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 19:12:17.357372  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282.pem --> /usr/share/ca-certificates/95282.pem (1338 bytes)
	I0729 19:12:17.380363  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem --> /usr/share/ca-certificates/952822.pem (1708 bytes)
	I0729 19:12:17.403737  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 19:12:17.428493  156414 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 19:12:17.445767  156414 ssh_runner.go:195] Run: openssl version
	I0729 19:12:17.451514  156414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/952822.pem && ln -fs /usr/share/ca-certificates/952822.pem /etc/ssl/certs/952822.pem"
	I0729 19:12:17.463663  156414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/952822.pem
	I0729 19:12:17.469467  156414 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:45 /usr/share/ca-certificates/952822.pem
	I0729 19:12:17.469523  156414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/952822.pem
	I0729 19:12:17.475754  156414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/952822.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 19:12:17.487617  156414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 19:12:17.498699  156414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:12:17.503205  156414 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:12:17.503257  156414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:12:17.508880  156414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 19:12:17.519570  156414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/95282.pem && ln -fs /usr/share/ca-certificates/95282.pem /etc/ssl/certs/95282.pem"
	I0729 19:12:17.530630  156414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/95282.pem
	I0729 19:12:17.535004  156414 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:45 /usr/share/ca-certificates/95282.pem
	I0729 19:12:17.535046  156414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/95282.pem
	I0729 19:12:17.540647  156414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/95282.pem /etc/ssl/certs/51391683.0"
	I0729 19:12:17.551719  156414 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 19:12:17.556199  156414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 19:12:17.562469  156414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 19:12:17.568561  156414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 19:12:17.574653  156414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 19:12:17.580458  156414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 19:12:17.586392  156414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
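
Each `openssl x509 -checkend 86400` above exits non-zero if the certificate expires within the next 86400 seconds (24 hours), which is how the control-plane certs are screened before being reused. An equivalent check written directly against a PEM file in Go (a sketch; the path in main is just one of the certs the log inspects):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires inside the window.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// One of the certs the log inspects; any PEM certificate path works here.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
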
	I0729 19:12:17.592304  156414 kubeadm.go:392] StartCluster: {Name:embed-certs-368536 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:embed-certs-368536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.95 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:12:17.592422  156414 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 19:12:17.592467  156414 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:12:17.631628  156414 cri.go:89] found id: ""
	I0729 19:12:17.631701  156414 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 19:12:17.642636  156414 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 19:12:17.642665  156414 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 19:12:17.642731  156414 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 19:12:17.652551  156414 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 19:12:17.653603  156414 kubeconfig.go:125] found "embed-certs-368536" server: "https://192.168.50.95:8443"
	I0729 19:12:17.656022  156414 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 19:12:17.666987  156414 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.95
	I0729 19:12:17.667014  156414 kubeadm.go:1160] stopping kube-system containers ...
	I0729 19:12:17.667026  156414 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 19:12:17.667065  156414 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:12:17.709534  156414 cri.go:89] found id: ""
	I0729 19:12:17.709598  156414 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 19:12:17.729709  156414 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:12:17.739968  156414 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:12:17.739990  156414 kubeadm.go:157] found existing configuration files:
	
	I0729 19:12:17.740051  156414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 19:12:17.749727  156414 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:12:17.749794  156414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:12:17.760013  156414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 19:12:17.781070  156414 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:12:17.781135  156414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:12:17.794747  156414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 19:12:17.805862  156414 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:12:17.805938  156414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:12:17.815892  156414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 19:12:17.825005  156414 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:12:17.825072  156414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 19:12:17.834745  156414 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:12:17.844191  156414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:12:17.963586  156414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:12:19.325254  156414 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.361626819s)
	I0729 19:12:19.325289  156414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:12:19.537565  156414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:12:19.611697  156414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:12:19.710291  156414 api_server.go:52] waiting for apiserver process to appear ...
	I0729 19:12:19.710408  156414 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:12:20.210809  156414 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:12:20.711234  156414 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:12:20.727361  156414 api_server.go:72] duration metric: took 1.017067714s to wait for apiserver process to appear ...
	I0729 19:12:20.727396  156414 api_server.go:88] waiting for apiserver healthz status ...
	I0729 19:12:20.727432  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:20.727961  156414 api_server.go:269] stopped: https://192.168.50.95:8443/healthz: Get "https://192.168.50.95:8443/healthz": dial tcp 192.168.50.95:8443: connect: connection refused
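
From here the restart blocks on the apiserver's /healthz endpoint: connection-refused errors, 403s (anonymous access not yet authorized) and 500s (poststart hooks still settling) are all treated as "not ready yet", and polling continues until a 200 arrives or the wait times out. A minimal version of that loop (a sketch, not minikube's api_server.go; it skips TLS verification because the endpoint serves the cluster's self-signed cert, whereas real code would trust the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 OK or the timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Verification is skipped in this sketch only; pin the cluster CA in real code.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			code := resp.StatusCode
			resp.Body.Close()
			if code == http.StatusOK {
				return nil // healthz finally answered 200
			}
			// 403 and 500 just mean the apiserver is up but still bootstrapping.
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence visible in the log
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.50.95:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
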
	I0729 19:12:21.228408  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:23.622048  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 19:12:23.622093  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 19:12:23.622106  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:23.628174  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 19:12:23.628195  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 19:12:23.728403  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:23.732920  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:12:23.732969  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:12:24.227954  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:24.233109  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:12:24.233143  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:12:24.727641  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:24.735127  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:12:24.735156  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:12:25.227686  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:25.236914  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:12:25.236947  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:12:25.728500  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:25.735276  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:12:25.735307  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:12:26.227824  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:26.232439  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:12:26.232471  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:12:26.728521  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:26.732952  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:12:26.732989  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:12:27.227504  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:27.232166  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 200:
	ok
	I0729 19:12:27.238505  156414 api_server.go:141] control plane version: v1.30.3
	I0729 19:12:27.238531  156414 api_server.go:131] duration metric: took 6.511128129s to wait for apiserver health ...
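
The repeated 500/200 responses above come from minikube polling the apiserver's /healthz endpoint until it reports healthy. A minimal, self-contained Go sketch of that style of wait loop follows; it is illustrative only (not minikube's actual api_server.go code), and the endpoint URL and timeout are taken from the log rather than from any fixed API.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
    // The verbose body (the [+]/[-] check list seen in the log) is printed on
    // failure so the failing poststarthook is visible.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		// The apiserver serves a self-signed cert during bootstrap, so this
    		// probe skips verification (illustrative shortcut only).
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    		Timeout:   5 * time.Second,
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url + "?verbose")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }

    func main() {
    	// Endpoint as seen in the log above.
    	if err := waitForHealthz("https://192.168.50.95:8443/healthz", 2*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
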
	I0729 19:12:27.238541  156414 cni.go:84] Creating CNI manager for ""
	I0729 19:12:27.238560  156414 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:12:27.240943  156414 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 19:12:27.242526  156414 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 19:12:27.254578  156414 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
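
The 496-byte /etc/cni/net.d/1-k8s.conflist copied here is not shown in the log. As an assumption, a generic bridge-plus-portmap conflist of the kind the bridge CNI plugin expects looks roughly like the JSON emitted by the sketch below; the field values and subnet are illustrative, not necessarily minikube's exact file.

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    func main() {
    	// A generic bridge + portmap conflist; values are illustrative only.
    	conflist := map[string]interface{}{
    		"cniVersion": "0.3.1",
    		"name":       "bridge",
    		"plugins": []map[string]interface{}{
    			{
    				"type":             "bridge",
    				"bridge":           "bridge",
    				"isDefaultGateway": true,
    				"ipMasq":           true,
    				"hairpinMode":      true,
    				"ipam": map[string]interface{}{
    					"type":   "host-local",
    					"subnet": "10.244.0.0/16",
    				},
    			},
    			{
    				"type":         "portmap",
    				"capabilities": map[string]bool{"portMappings": true},
    			},
    		},
    	}
    	out, _ := json.MarshalIndent(conflist, "", "  ")
    	// In practice this JSON would be written to /etc/cni/net.d/1-k8s.conflist.
    	fmt.Println(string(out))
    }
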
	I0729 19:12:27.274700  156414 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 19:12:27.289359  156414 system_pods.go:59] 8 kube-system pods found
	I0729 19:12:27.289412  156414 system_pods.go:61] "coredns-7db6d8ff4d-dww2j" [31418aeb-98be-4b43-a687-5c8f64ec5e9d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:12:27.289424  156414 system_pods.go:61] "etcd-embed-certs-368536" [0a7aac00-9161-473a-87f1-2702211beaac] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 19:12:27.289443  156414 system_pods.go:61] "kube-apiserver-embed-certs-368536" [d6638a87-43df-457e-b335-1ab2aa03f421] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 19:12:27.289469  156414 system_pods.go:61] "kube-controller-manager-embed-certs-368536" [76fefc23-7794-40fc-845a-73d83eb55450] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 19:12:27.289478  156414 system_pods.go:61] "kube-proxy-9xwt2" [f637605b-9bc2-4922-b801-04f681a81e7c] Running
	I0729 19:12:27.289485  156414 system_pods.go:61] "kube-scheduler-embed-certs-368536" [476e1d42-6610-44b5-b77b-b25537f044eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 19:12:27.289495  156414 system_pods.go:61] "metrics-server-569cc877fc-xnkwq" [19328b03-8c7d-499f-9a30-31f023605e49] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:12:27.289500  156414 system_pods.go:61] "storage-provisioner" [b049d065-fbb1-48b6-a377-2938a0519c78] Running
	I0729 19:12:27.289513  156414 system_pods.go:74] duration metric: took 14.789212ms to wait for pod list to return data ...
	I0729 19:12:27.289525  156414 node_conditions.go:102] verifying NodePressure condition ...
	I0729 19:12:27.292692  156414 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 19:12:27.292716  156414 node_conditions.go:123] node cpu capacity is 2
	I0729 19:12:27.292809  156414 node_conditions.go:105] duration metric: took 3.278515ms to run NodePressure ...
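
The system_pods and NodePressure checks above reduce to listing pods in kube-system and reading node capacity. A hedged client-go sketch of the same idea follows; the kubeconfig path is a placeholder and this is not minikube's system_pods.go / node_conditions.go code.

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Placeholder kubeconfig path.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ctx := context.Background()

    	// Equivalent of "waiting for kube-system pods to appear": list them once.
    	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("%d kube-system pods found\n", len(pods.Items))

    	// Equivalent of the NodePressure capacity check: read node capacity.
    	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
    	}
    }
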
	I0729 19:12:27.292825  156414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:12:27.563167  156414 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 19:12:27.567532  156414 kubeadm.go:739] kubelet initialised
	I0729 19:12:27.567552  156414 kubeadm.go:740] duration metric: took 4.363687ms waiting for restarted kubelet to initialise ...
	I0729 19:12:27.567566  156414 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:12:27.573549  156414 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-dww2j" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:29.580511  156414 pod_ready.go:102] pod "coredns-7db6d8ff4d-dww2j" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:32.080352  156414 pod_ready.go:102] pod "coredns-7db6d8ff4d-dww2j" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:34.079542  156414 pod_ready.go:92] pod "coredns-7db6d8ff4d-dww2j" in "kube-system" namespace has status "Ready":"True"
	I0729 19:12:34.079564  156414 pod_ready.go:81] duration metric: took 6.505994438s for pod "coredns-7db6d8ff4d-dww2j" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:34.079574  156414 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:36.086785  156414 pod_ready.go:102] pod "etcd-embed-certs-368536" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:37.089587  156414 pod_ready.go:92] pod "etcd-embed-certs-368536" in "kube-system" namespace has status "Ready":"True"
	I0729 19:12:37.089611  156414 pod_ready.go:81] duration metric: took 3.010030448s for pod "etcd-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.089621  156414 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.093814  156414 pod_ready.go:92] pod "kube-apiserver-embed-certs-368536" in "kube-system" namespace has status "Ready":"True"
	I0729 19:12:37.093833  156414 pod_ready.go:81] duration metric: took 4.206508ms for pod "kube-apiserver-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.093842  156414 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.097603  156414 pod_ready.go:92] pod "kube-controller-manager-embed-certs-368536" in "kube-system" namespace has status "Ready":"True"
	I0729 19:12:37.097621  156414 pod_ready.go:81] duration metric: took 3.772149ms for pod "kube-controller-manager-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.097633  156414 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9xwt2" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.101696  156414 pod_ready.go:92] pod "kube-proxy-9xwt2" in "kube-system" namespace has status "Ready":"True"
	I0729 19:12:37.101720  156414 pod_ready.go:81] duration metric: took 4.078653ms for pod "kube-proxy-9xwt2" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.101732  156414 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.107711  156414 pod_ready.go:92] pod "kube-scheduler-embed-certs-368536" in "kube-system" namespace has status "Ready":"True"
	I0729 19:12:37.107728  156414 pod_ready.go:81] duration metric: took 5.989066ms for pod "kube-scheduler-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.107738  156414 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:39.113796  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:41.115401  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:43.614461  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:45.614900  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:48.113738  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:50.114364  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:52.114790  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:54.613472  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:56.613889  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:59.114206  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:01.614516  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:04.114859  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:06.114993  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:08.615213  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:11.114015  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:13.114342  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:15.614423  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:18.114442  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:20.614465  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:23.117909  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:25.614155  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:28.114746  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:30.613689  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:32.614790  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:35.113716  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:37.116362  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:39.614545  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:42.114414  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:44.114654  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:46.114765  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:48.615523  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:51.114096  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:53.114180  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:55.614278  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:58.114138  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:00.613679  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:02.614878  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:05.117278  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:07.614025  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:09.614681  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:12.114414  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:14.114458  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:16.614364  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:19.114533  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:21.613756  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:24.114325  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:26.614276  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:29.114137  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:31.114274  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:33.115749  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:35.614067  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:37.614374  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:39.615618  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:42.114139  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:44.114503  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:46.114624  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:48.613926  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:50.614527  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:53.115129  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:55.613563  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:57.615164  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:59.616129  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:02.114384  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:04.114621  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:06.114864  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:08.115242  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:10.613949  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:13.115359  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:15.614560  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:17.615109  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:20.114341  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:22.115253  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:24.119792  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:26.614361  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:29.113806  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:31.114150  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:33.614207  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:35.616204  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:38.113264  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:40.615054  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:42.615127  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:45.115119  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:47.613589  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:49.613803  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:51.615235  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:54.113908  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:56.614614  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:59.114193  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:01.614642  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:04.114186  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:06.614156  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:08.614216  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:10.615368  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:13.116263  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:15.613987  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:17.614183  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:19.617124  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:22.114156  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:24.613643  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:26.613720  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:28.616174  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:31.114289  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:33.114818  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:35.614735  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:37.107998  156414 pod_ready.go:81] duration metric: took 4m0.000241864s for pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace to be "Ready" ...
	E0729 19:16:37.108045  156414 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 19:16:37.108068  156414 pod_ready.go:38] duration metric: took 4m9.540493845s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:16:37.108105  156414 kubeadm.go:597] duration metric: took 4m19.465427343s to restartPrimaryControlPlane
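
The long pod_ready.go wait above is a poll on each pod's Ready condition; metrics-server-569cc877fc-xnkwq never reports Ready:"True", so the extra 4m0s wait times out and the control-plane restart is abandoned. A minimal client-go sketch of that condition check (illustrative, not minikube's pod_ready.go; the kubeconfig path is a placeholder):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(),
    			"metrics-server-569cc877fc-xnkwq", metav1.GetOptions{})
    		if err == nil && isPodReady(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("timed out waiting for pod to be Ready")
    }
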
	W0729 19:16:37.108167  156414 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 19:16:37.108196  156414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 19:17:08.548650  156414 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.44042578s)
	I0729 19:17:08.548730  156414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 19:17:08.564620  156414 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:17:08.575061  156414 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:17:08.585537  156414 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:17:08.585566  156414 kubeadm.go:157] found existing configuration files:
	
	I0729 19:17:08.585610  156414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 19:17:08.594641  156414 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:17:08.594702  156414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:17:08.604434  156414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 19:17:08.613126  156414 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:17:08.613177  156414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:17:08.622123  156414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 19:17:08.630620  156414 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:17:08.630661  156414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:17:08.640140  156414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 19:17:08.648712  156414 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:17:08.648768  156414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
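
The grep/rm sequence above removes any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint before re-running kubeadm init. A simplified Go sketch of that check-then-remove pattern, with paths and endpoint taken from the log and error handling deliberately minimal:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // removeIfStale deletes a kubeconfig that is missing or does not reference
    // the expected control-plane endpoint, mirroring the grep/rm steps above.
    func removeIfStale(path, endpoint string) {
    	data, err := os.ReadFile(path)
    	if err != nil || !strings.Contains(string(data), endpoint) {
    		if rmErr := os.Remove(path); rmErr != nil && !os.IsNotExist(rmErr) {
    			fmt.Println("remove failed:", rmErr)
    		}
    	}
    }

    func main() {
    	endpoint := "https://control-plane.minikube.internal:8443"
    	for _, f := range []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	} {
    		removeIfStale(f, endpoint)
    	}
    }
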
	I0729 19:17:08.658010  156414 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 19:17:08.709849  156414 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 19:17:08.709998  156414 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 19:17:08.850515  156414 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 19:17:08.850632  156414 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 19:17:08.850769  156414 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 19:17:09.057782  156414 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 19:17:09.059421  156414 out.go:204]   - Generating certificates and keys ...
	I0729 19:17:09.059494  156414 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 19:17:09.059566  156414 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 19:17:09.059636  156414 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 19:17:09.062277  156414 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 19:17:09.062401  156414 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 19:17:09.062475  156414 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 19:17:09.062526  156414 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 19:17:09.062616  156414 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 19:17:09.062695  156414 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 19:17:09.062807  156414 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 19:17:09.062863  156414 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 19:17:09.062933  156414 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 19:17:09.426782  156414 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 19:17:09.599745  156414 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 19:17:09.741530  156414 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 19:17:09.907315  156414 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 19:17:10.118045  156414 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 19:17:10.118623  156414 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 19:17:10.121594  156414 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 19:17:10.124052  156414 out.go:204]   - Booting up control plane ...
	I0729 19:17:10.124173  156414 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 19:17:10.124267  156414 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 19:17:10.124374  156414 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 19:17:10.144903  156414 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 19:17:10.145010  156414 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 19:17:10.145047  156414 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 19:17:10.278905  156414 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 19:17:10.279025  156414 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 19:17:11.280964  156414 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002120381s
	I0729 19:17:11.281070  156414 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 19:17:15.782460  156414 kubeadm.go:310] [api-check] The API server is healthy after 4.501562605s
	I0729 19:17:15.804614  156414 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 19:17:15.822230  156414 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 19:17:15.849613  156414 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 19:17:15.849870  156414 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-368536 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 19:17:15.861910  156414 kubeadm.go:310] [bootstrap-token] Using token: zhramo.fqhnhxuylehyq043
	I0729 19:17:15.863215  156414 out.go:204]   - Configuring RBAC rules ...
	I0729 19:17:15.863352  156414 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 19:17:15.870893  156414 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 19:17:15.886779  156414 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 19:17:15.889933  156414 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 19:17:15.893111  156414 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 19:17:15.895970  156414 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 19:17:16.200928  156414 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 19:17:16.625621  156414 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 19:17:17.195772  156414 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 19:17:17.197712  156414 kubeadm.go:310] 
	I0729 19:17:17.197780  156414 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 19:17:17.197791  156414 kubeadm.go:310] 
	I0729 19:17:17.197874  156414 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 19:17:17.197885  156414 kubeadm.go:310] 
	I0729 19:17:17.197925  156414 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 19:17:17.198023  156414 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 19:17:17.198108  156414 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 19:17:17.198120  156414 kubeadm.go:310] 
	I0729 19:17:17.198190  156414 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 19:17:17.198200  156414 kubeadm.go:310] 
	I0729 19:17:17.198258  156414 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 19:17:17.198267  156414 kubeadm.go:310] 
	I0729 19:17:17.198347  156414 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 19:17:17.198451  156414 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 19:17:17.198529  156414 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 19:17:17.198539  156414 kubeadm.go:310] 
	I0729 19:17:17.198633  156414 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 19:17:17.198750  156414 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 19:17:17.198761  156414 kubeadm.go:310] 
	I0729 19:17:17.198895  156414 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token zhramo.fqhnhxuylehyq043 \
	I0729 19:17:17.199041  156414 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:13f0bc092dc09b7291dec830fc09c94db5ac1707dd26df6091df80009af3af7f \
	I0729 19:17:17.199074  156414 kubeadm.go:310] 	--control-plane 
	I0729 19:17:17.199081  156414 kubeadm.go:310] 
	I0729 19:17:17.199199  156414 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 19:17:17.199210  156414 kubeadm.go:310] 
	I0729 19:17:17.199327  156414 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token zhramo.fqhnhxuylehyq043 \
	I0729 19:17:17.199478  156414 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:13f0bc092dc09b7291dec830fc09c94db5ac1707dd26df6091df80009af3af7f 
	I0729 19:17:17.200591  156414 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
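
The --discovery-token-ca-cert-hash in the join commands printed above is the SHA-256 of the cluster CA certificate's DER-encoded public key (SubjectPublicKeyInfo). A small Go sketch that reproduces the same value from a CA file; the path is the conventional kubeadm location and is an assumption here:

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	// Conventional kubeadm CA location; adjust if the cluster uses another path.
    	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		panic("no PEM block found in ca.crt")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA cert.
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	fmt.Printf("sha256:%x\n", sum)
    }
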
	I0729 19:17:17.200629  156414 cni.go:84] Creating CNI manager for ""
	I0729 19:17:17.200642  156414 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:17:17.202541  156414 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 19:17:17.203847  156414 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 19:17:17.214711  156414 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 19:17:17.233233  156414 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 19:17:17.233330  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:17.233332  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-368536 minikube.k8s.io/updated_at=2024_07_29T19_17_17_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=de53afae5e8f4269438099f4dad14d93a8a17e35 minikube.k8s.io/name=embed-certs-368536 minikube.k8s.io/primary=true
	I0729 19:17:17.265931  156414 ops.go:34] apiserver oom_adj: -16
	I0729 19:17:17.410594  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:17.911585  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:18.410650  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:18.911432  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:19.411062  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:19.911629  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:20.411050  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:20.911004  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:21.411031  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:21.910787  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:22.411228  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:22.911181  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:23.410624  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:23.910844  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:24.411409  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:24.910745  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:25.410675  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:25.910901  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:26.411562  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:26.911505  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:27.411552  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:27.910916  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:28.410868  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:28.911466  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:29.410633  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:29.911613  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:29.992725  156414 kubeadm.go:1113] duration metric: took 12.75946311s to wait for elevateKubeSystemPrivileges
	I0729 19:17:29.992767  156414 kubeadm.go:394] duration metric: took 5m12.400472687s to StartCluster
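
The repeated "kubectl get sa default" calls above act as a readiness gate: once the default ServiceAccount exists, kube-system privileges can be elevated and StartCluster completes. A hedged client-go equivalent of that wait (not minikube's code; the kubeconfig path is the one shown in the log):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // path from the log
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	// Poll until the "default" ServiceAccount appears in the "default"
    	// namespace, roughly every 500ms as in the log timestamps above.
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		_, err := cs.CoreV1().ServiceAccounts("default").Get(context.Background(), "default", metav1.GetOptions{})
    		if err == nil {
    			fmt.Println("default service account exists")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for the default service account")
    }
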
	I0729 19:17:29.992793  156414 settings.go:142] acquiring lock: {Name:mk5599d3fc2f664ed5eea99f33b4436f64ab8c83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:17:29.992902  156414 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19339-88081/kubeconfig
	I0729 19:17:29.994489  156414 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/kubeconfig: {Name:mk6702c0db404329102489655bdd2ff03ad6e919 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:17:29.994792  156414 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.95 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 19:17:29.994828  156414 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 19:17:29.994917  156414 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-368536"
	I0729 19:17:29.994954  156414 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-368536"
	I0729 19:17:29.994957  156414 addons.go:69] Setting default-storageclass=true in profile "embed-certs-368536"
	W0729 19:17:29.994966  156414 addons.go:243] addon storage-provisioner should already be in state true
	I0729 19:17:29.994969  156414 addons.go:69] Setting metrics-server=true in profile "embed-certs-368536"
	I0729 19:17:29.995004  156414 host.go:66] Checking if "embed-certs-368536" exists ...
	I0729 19:17:29.995003  156414 config.go:182] Loaded profile config "embed-certs-368536": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:17:29.995028  156414 addons.go:234] Setting addon metrics-server=true in "embed-certs-368536"
	W0729 19:17:29.995041  156414 addons.go:243] addon metrics-server should already be in state true
	I0729 19:17:29.994986  156414 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-368536"
	I0729 19:17:29.995073  156414 host.go:66] Checking if "embed-certs-368536" exists ...
	I0729 19:17:29.995409  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:17:29.995457  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:17:29.995460  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:17:29.995487  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:17:29.995487  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:17:29.995636  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:17:29.997279  156414 out.go:177] * Verifying Kubernetes components...
	I0729 19:17:29.998614  156414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:17:30.011510  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41821
	I0729 19:17:30.011717  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38475
	I0729 19:17:30.011970  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:17:30.012063  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:17:30.012480  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:17:30.012505  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:17:30.012626  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:17:30.012651  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:17:30.012967  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:17:30.013105  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:17:30.013284  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetState
	I0729 19:17:30.013527  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:17:30.013574  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:17:30.014086  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45947
	I0729 19:17:30.014502  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:17:30.015001  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:17:30.015018  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:17:30.015505  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:17:30.016720  156414 addons.go:234] Setting addon default-storageclass=true in "embed-certs-368536"
	W0729 19:17:30.016740  156414 addons.go:243] addon default-storageclass should already be in state true
	I0729 19:17:30.016770  156414 host.go:66] Checking if "embed-certs-368536" exists ...
	I0729 19:17:30.017091  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:17:30.017123  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:17:30.017432  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:17:30.017477  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:17:30.034798  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40239
	I0729 19:17:30.035372  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:17:30.036179  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:17:30.036207  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:17:30.037055  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37901
	I0729 19:17:30.037161  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46563
	I0729 19:17:30.036581  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:17:30.037493  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetState
	I0729 19:17:30.037581  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:17:30.037636  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:17:30.038047  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:17:30.038056  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:17:30.038073  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:17:30.038217  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:17:30.038403  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:17:30.038623  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:17:30.038627  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetState
	I0729 19:17:30.039185  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:17:30.039221  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:17:30.040574  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:17:30.040687  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:17:30.042879  156414 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 19:17:30.042873  156414 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:17:30.044279  156414 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 19:17:30.044298  156414 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 19:17:30.044324  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:17:30.044544  156414 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 19:17:30.044593  156414 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 19:17:30.044621  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:17:30.048075  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:17:30.048402  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:17:30.048442  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:17:30.048462  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:17:30.048613  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:17:30.048761  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:17:30.048845  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:17:30.048890  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:17:30.048914  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:17:30.049132  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:17:30.049289  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:17:30.049306  156414 sshutil.go:53] new ssh client: &{IP:192.168.50.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/embed-certs-368536/id_rsa Username:docker}
	I0729 19:17:30.049441  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:17:30.049593  156414 sshutil.go:53] new ssh client: &{IP:192.168.50.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/embed-certs-368536/id_rsa Username:docker}
	I0729 19:17:30.055718  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34081
	I0729 19:17:30.056086  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:17:30.056521  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:17:30.056546  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:17:30.056931  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:17:30.057098  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetState
	I0729 19:17:30.058559  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:17:30.058795  156414 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 19:17:30.058810  156414 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 19:17:30.058825  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:17:30.061253  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:17:30.061842  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:17:30.061880  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:17:30.061900  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:17:30.062053  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:17:30.062195  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:17:30.062346  156414 sshutil.go:53] new ssh client: &{IP:192.168.50.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/embed-certs-368536/id_rsa Username:docker}
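	The scp calls above stage the addon manifests under /etc/kubernetes/addons on the embed-certs-368536 VM before kubelet is started. A minimal way to inspect those staged files by hand (a sketch, assuming the profile still exists on the Jenkins host):

		# list the addon manifests minikube copied onto the node
		minikube -p embed-certs-368536 ssh -- sudo ls -l /etc/kubernetes/addons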
	I0729 19:17:30.192595  156414 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 19:17:30.208960  156414 node_ready.go:35] waiting up to 6m0s for node "embed-certs-368536" to be "Ready" ...
	I0729 19:17:30.216230  156414 node_ready.go:49] node "embed-certs-368536" has status "Ready":"True"
	I0729 19:17:30.216247  156414 node_ready.go:38] duration metric: took 7.255724ms for node "embed-certs-368536" to be "Ready" ...
	I0729 19:17:30.216256  156414 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:17:30.219988  156414 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:17:30.224074  156414 pod_ready.go:92] pod "etcd-embed-certs-368536" in "kube-system" namespace has status "Ready":"True"
	I0729 19:17:30.224099  156414 pod_ready.go:81] duration metric: took 4.088257ms for pod "etcd-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:17:30.224109  156414 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:17:30.228389  156414 pod_ready.go:92] pod "kube-apiserver-embed-certs-368536" in "kube-system" namespace has status "Ready":"True"
	I0729 19:17:30.228409  156414 pod_ready.go:81] duration metric: took 4.292723ms for pod "kube-apiserver-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:17:30.228417  156414 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:17:30.233616  156414 pod_ready.go:92] pod "kube-controller-manager-embed-certs-368536" in "kube-system" namespace has status "Ready":"True"
	I0729 19:17:30.233634  156414 pod_ready.go:81] duration metric: took 5.212376ms for pod "kube-controller-manager-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:17:30.233642  156414 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:17:30.242933  156414 pod_ready.go:92] pod "kube-scheduler-embed-certs-368536" in "kube-system" namespace has status "Ready":"True"
	I0729 19:17:30.242951  156414 pod_ready.go:81] duration metric: took 9.302507ms for pod "kube-scheduler-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:17:30.242959  156414 pod_ready.go:38] duration metric: took 26.692394ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:17:30.242973  156414 api_server.go:52] waiting for apiserver process to appear ...
	I0729 19:17:30.243016  156414 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:17:30.261484  156414 api_server.go:72] duration metric: took 266.652937ms to wait for apiserver process to appear ...
	I0729 19:17:30.261513  156414 api_server.go:88] waiting for apiserver healthz status ...
	I0729 19:17:30.261534  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:17:30.269760  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 200:
	ok
	I0729 19:17:30.270848  156414 api_server.go:141] control plane version: v1.30.3
	I0729 19:17:30.270872  156414 api_server.go:131] duration metric: took 9.352433ms to wait for apiserver health ...
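	The health probe above hits the apiserver's /healthz endpoint directly. It can be reproduced by hand roughly as follows (a sketch, assuming the VM at 192.168.50.95 is still reachable and the embed-certs-368536 kubectl context is configured):

		# raw probe against the endpoint from the log; -k skips verification of the
		# cluster's self-signed serving certificate
		curl -k https://192.168.50.95:8443/healthz        # expected body: ok
		# the same check routed through kubectl
		kubectl --context embed-certs-368536 get --raw /healthz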
	I0729 19:17:30.270880  156414 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 19:17:30.312744  156414 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 19:17:30.317547  156414 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 19:17:30.317570  156414 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 19:17:30.332468  156414 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 19:17:30.352498  156414 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 19:17:30.352531  156414 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 19:17:30.392028  156414 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 19:17:30.392055  156414 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 19:17:30.413559  156414 system_pods.go:59] 4 kube-system pods found
	I0729 19:17:30.413586  156414 system_pods.go:61] "etcd-embed-certs-368536" [d1a1b671-c852-4a69-80fe-9ad13fffc01b] Running
	I0729 19:17:30.413591  156414 system_pods.go:61] "kube-apiserver-embed-certs-368536" [3b9307df-4472-499d-8a82-78f0f342e745] Running
	I0729 19:17:30.413595  156414 system_pods.go:61] "kube-controller-manager-embed-certs-368536" [89bf9b90-6735-417d-9f30-13eacb946ef4] Running
	I0729 19:17:30.413598  156414 system_pods.go:61] "kube-scheduler-embed-certs-368536" [a44062e3-c937-4765-9dfc-858cacbd3a90] Running
	I0729 19:17:30.413603  156414 system_pods.go:74] duration metric: took 142.71846ms to wait for pod list to return data ...
	I0729 19:17:30.413610  156414 default_sa.go:34] waiting for default service account to be created ...
	I0729 19:17:30.424371  156414 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 19:17:30.615212  156414 default_sa.go:45] found service account: "default"
	I0729 19:17:30.615237  156414 default_sa.go:55] duration metric: took 201.621467ms for default service account to be created ...
	I0729 19:17:30.615246  156414 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 19:17:30.831144  156414 system_pods.go:86] 4 kube-system pods found
	I0729 19:17:30.831175  156414 system_pods.go:89] "etcd-embed-certs-368536" [d1a1b671-c852-4a69-80fe-9ad13fffc01b] Running
	I0729 19:17:30.831182  156414 system_pods.go:89] "kube-apiserver-embed-certs-368536" [3b9307df-4472-499d-8a82-78f0f342e745] Running
	I0729 19:17:30.831186  156414 system_pods.go:89] "kube-controller-manager-embed-certs-368536" [89bf9b90-6735-417d-9f30-13eacb946ef4] Running
	I0729 19:17:30.831190  156414 system_pods.go:89] "kube-scheduler-embed-certs-368536" [a44062e3-c937-4765-9dfc-858cacbd3a90] Running
	I0729 19:17:30.831210  156414 retry.go:31] will retry after 301.650623ms: missing components: kube-dns, kube-proxy
	I0729 19:17:31.127532  156414 main.go:141] libmachine: Making call to close driver server
	I0729 19:17:31.127599  156414 main.go:141] libmachine: (embed-certs-368536) Calling .Close
	I0729 19:17:31.127595  156414 main.go:141] libmachine: Making call to close driver server
	I0729 19:17:31.127620  156414 main.go:141] libmachine: (embed-certs-368536) Calling .Close
	I0729 19:17:31.127910  156414 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:17:31.127925  156414 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:17:31.127935  156414 main.go:141] libmachine: Making call to close driver server
	I0729 19:17:31.127943  156414 main.go:141] libmachine: (embed-certs-368536) Calling .Close
	I0729 19:17:31.127958  156414 main.go:141] libmachine: (embed-certs-368536) DBG | Closing plugin on server side
	I0729 19:17:31.127974  156414 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:17:31.127985  156414 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:17:31.127999  156414 main.go:141] libmachine: Making call to close driver server
	I0729 19:17:31.128008  156414 main.go:141] libmachine: (embed-certs-368536) Calling .Close
	I0729 19:17:31.128212  156414 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:17:31.128221  156414 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:17:31.128440  156414 main.go:141] libmachine: (embed-certs-368536) DBG | Closing plugin on server side
	I0729 19:17:31.128455  156414 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:17:31.128467  156414 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:17:31.155504  156414 system_pods.go:86] 8 kube-system pods found
	I0729 19:17:31.155543  156414 system_pods.go:89] "coredns-7db6d8ff4d-ds92x" [81db7fca-a759-47fc-bea8-697adcb09763] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:17:31.155559  156414 system_pods.go:89] "coredns-7db6d8ff4d-gnrvx" [b3edf97c-8085-4d56-a12d-a6daa3782004] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:17:31.155565  156414 system_pods.go:89] "etcd-embed-certs-368536" [d1a1b671-c852-4a69-80fe-9ad13fffc01b] Running
	I0729 19:17:31.155570  156414 system_pods.go:89] "kube-apiserver-embed-certs-368536" [3b9307df-4472-499d-8a82-78f0f342e745] Running
	I0729 19:17:31.155575  156414 system_pods.go:89] "kube-controller-manager-embed-certs-368536" [89bf9b90-6735-417d-9f30-13eacb946ef4] Running
	I0729 19:17:31.155580  156414 system_pods.go:89] "kube-proxy-rxqlm" [1b66638b-5fb8-4bae-a129-8a1fb54389f4] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 19:17:31.155586  156414 system_pods.go:89] "kube-scheduler-embed-certs-368536" [a44062e3-c937-4765-9dfc-858cacbd3a90] Running
	I0729 19:17:31.155590  156414 system_pods.go:89] "storage-provisioner" [83d64a91-164a-45b2-9a82-d826e64e6cbd] Pending
	I0729 19:17:31.155606  156414 retry.go:31] will retry after 310.574298ms: missing components: kube-dns, kube-proxy
	I0729 19:17:31.159525  156414 main.go:141] libmachine: Making call to close driver server
	I0729 19:17:31.159546  156414 main.go:141] libmachine: (embed-certs-368536) Calling .Close
	I0729 19:17:31.160952  156414 main.go:141] libmachine: (embed-certs-368536) DBG | Closing plugin on server side
	I0729 19:17:31.160961  156414 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:17:31.160976  156414 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:17:31.346360  156414 main.go:141] libmachine: Making call to close driver server
	I0729 19:17:31.346390  156414 main.go:141] libmachine: (embed-certs-368536) Calling .Close
	I0729 19:17:31.346700  156414 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:17:31.346718  156414 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:17:31.346732  156414 main.go:141] libmachine: Making call to close driver server
	I0729 19:17:31.346742  156414 main.go:141] libmachine: (embed-certs-368536) Calling .Close
	I0729 19:17:31.347006  156414 main.go:141] libmachine: (embed-certs-368536) DBG | Closing plugin on server side
	I0729 19:17:31.347052  156414 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:17:31.347059  156414 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:17:31.347075  156414 addons.go:475] Verifying addon metrics-server=true in "embed-certs-368536"
	I0729 19:17:31.348884  156414 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0729 19:17:31.350473  156414 addons.go:510] duration metric: took 1.355642198s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
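	The three addons reported enabled above can also be toggled imperatively after the fact. A rough equivalent, assuming the embed-certs-368536 profile is still present:

		# enable the same addons the test turned on during start
		minikube -p embed-certs-368536 addons enable storage-provisioner
		minikube -p embed-certs-368536 addons enable default-storageclass
		minikube -p embed-certs-368536 addons enable metrics-server
		# confirm their state
		minikube -p embed-certs-368536 addons list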
	I0729 19:17:31.473514  156414 system_pods.go:86] 9 kube-system pods found
	I0729 19:17:31.473553  156414 system_pods.go:89] "coredns-7db6d8ff4d-ds92x" [81db7fca-a759-47fc-bea8-697adcb09763] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:17:31.473561  156414 system_pods.go:89] "coredns-7db6d8ff4d-gnrvx" [b3edf97c-8085-4d56-a12d-a6daa3782004] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:17:31.473567  156414 system_pods.go:89] "etcd-embed-certs-368536" [d1a1b671-c852-4a69-80fe-9ad13fffc01b] Running
	I0729 19:17:31.473573  156414 system_pods.go:89] "kube-apiserver-embed-certs-368536" [3b9307df-4472-499d-8a82-78f0f342e745] Running
	I0729 19:17:31.473578  156414 system_pods.go:89] "kube-controller-manager-embed-certs-368536" [89bf9b90-6735-417d-9f30-13eacb946ef4] Running
	I0729 19:17:31.473583  156414 system_pods.go:89] "kube-proxy-rxqlm" [1b66638b-5fb8-4bae-a129-8a1fb54389f4] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 19:17:31.473587  156414 system_pods.go:89] "kube-scheduler-embed-certs-368536" [a44062e3-c937-4765-9dfc-858cacbd3a90] Running
	I0729 19:17:31.473596  156414 system_pods.go:89] "metrics-server-569cc877fc-9z4tp" [1382116a-be81-46cb-92b8-ae335164f846] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:17:31.473605  156414 system_pods.go:89] "storage-provisioner" [83d64a91-164a-45b2-9a82-d826e64e6cbd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 19:17:31.473622  156414 retry.go:31] will retry after 446.790872ms: missing components: kube-dns, kube-proxy
	I0729 19:17:31.928348  156414 system_pods.go:86] 9 kube-system pods found
	I0729 19:17:31.928381  156414 system_pods.go:89] "coredns-7db6d8ff4d-ds92x" [81db7fca-a759-47fc-bea8-697adcb09763] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:17:31.928389  156414 system_pods.go:89] "coredns-7db6d8ff4d-gnrvx" [b3edf97c-8085-4d56-a12d-a6daa3782004] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:17:31.928396  156414 system_pods.go:89] "etcd-embed-certs-368536" [d1a1b671-c852-4a69-80fe-9ad13fffc01b] Running
	I0729 19:17:31.928401  156414 system_pods.go:89] "kube-apiserver-embed-certs-368536" [3b9307df-4472-499d-8a82-78f0f342e745] Running
	I0729 19:17:31.928406  156414 system_pods.go:89] "kube-controller-manager-embed-certs-368536" [89bf9b90-6735-417d-9f30-13eacb946ef4] Running
	I0729 19:17:31.928409  156414 system_pods.go:89] "kube-proxy-rxqlm" [1b66638b-5fb8-4bae-a129-8a1fb54389f4] Running
	I0729 19:17:31.928413  156414 system_pods.go:89] "kube-scheduler-embed-certs-368536" [a44062e3-c937-4765-9dfc-858cacbd3a90] Running
	I0729 19:17:31.928420  156414 system_pods.go:89] "metrics-server-569cc877fc-9z4tp" [1382116a-be81-46cb-92b8-ae335164f846] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:17:31.928429  156414 system_pods.go:89] "storage-provisioner" [83d64a91-164a-45b2-9a82-d826e64e6cbd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 19:17:31.928444  156414 retry.go:31] will retry after 467.830899ms: missing components: kube-dns
	I0729 19:17:32.403619  156414 system_pods.go:86] 9 kube-system pods found
	I0729 19:17:32.403649  156414 system_pods.go:89] "coredns-7db6d8ff4d-ds92x" [81db7fca-a759-47fc-bea8-697adcb09763] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:17:32.403659  156414 system_pods.go:89] "coredns-7db6d8ff4d-gnrvx" [b3edf97c-8085-4d56-a12d-a6daa3782004] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:17:32.403665  156414 system_pods.go:89] "etcd-embed-certs-368536" [d1a1b671-c852-4a69-80fe-9ad13fffc01b] Running
	I0729 19:17:32.403670  156414 system_pods.go:89] "kube-apiserver-embed-certs-368536" [3b9307df-4472-499d-8a82-78f0f342e745] Running
	I0729 19:17:32.403676  156414 system_pods.go:89] "kube-controller-manager-embed-certs-368536" [89bf9b90-6735-417d-9f30-13eacb946ef4] Running
	I0729 19:17:32.403683  156414 system_pods.go:89] "kube-proxy-rxqlm" [1b66638b-5fb8-4bae-a129-8a1fb54389f4] Running
	I0729 19:17:32.403689  156414 system_pods.go:89] "kube-scheduler-embed-certs-368536" [a44062e3-c937-4765-9dfc-858cacbd3a90] Running
	I0729 19:17:32.403697  156414 system_pods.go:89] "metrics-server-569cc877fc-9z4tp" [1382116a-be81-46cb-92b8-ae335164f846] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:17:32.403706  156414 system_pods.go:89] "storage-provisioner" [83d64a91-164a-45b2-9a82-d826e64e6cbd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 19:17:32.403729  156414 retry.go:31] will retry after 745.010861ms: missing components: kube-dns
	I0729 19:17:33.163660  156414 system_pods.go:86] 9 kube-system pods found
	I0729 19:17:33.163697  156414 system_pods.go:89] "coredns-7db6d8ff4d-ds92x" [81db7fca-a759-47fc-bea8-697adcb09763] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:17:33.163710  156414 system_pods.go:89] "coredns-7db6d8ff4d-gnrvx" [b3edf97c-8085-4d56-a12d-a6daa3782004] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:17:33.163719  156414 system_pods.go:89] "etcd-embed-certs-368536" [d1a1b671-c852-4a69-80fe-9ad13fffc01b] Running
	I0729 19:17:33.163733  156414 system_pods.go:89] "kube-apiserver-embed-certs-368536" [3b9307df-4472-499d-8a82-78f0f342e745] Running
	I0729 19:17:33.163740  156414 system_pods.go:89] "kube-controller-manager-embed-certs-368536" [89bf9b90-6735-417d-9f30-13eacb946ef4] Running
	I0729 19:17:33.163746  156414 system_pods.go:89] "kube-proxy-rxqlm" [1b66638b-5fb8-4bae-a129-8a1fb54389f4] Running
	I0729 19:17:33.163751  156414 system_pods.go:89] "kube-scheduler-embed-certs-368536" [a44062e3-c937-4765-9dfc-858cacbd3a90] Running
	I0729 19:17:33.163761  156414 system_pods.go:89] "metrics-server-569cc877fc-9z4tp" [1382116a-be81-46cb-92b8-ae335164f846] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:17:33.163770  156414 system_pods.go:89] "storage-provisioner" [83d64a91-164a-45b2-9a82-d826e64e6cbd] Running
	I0729 19:17:33.163791  156414 retry.go:31] will retry after 658.944312ms: missing components: kube-dns
	I0729 19:17:33.830608  156414 system_pods.go:86] 9 kube-system pods found
	I0729 19:17:33.830643  156414 system_pods.go:89] "coredns-7db6d8ff4d-ds92x" [81db7fca-a759-47fc-bea8-697adcb09763] Running
	I0729 19:17:33.830650  156414 system_pods.go:89] "coredns-7db6d8ff4d-gnrvx" [b3edf97c-8085-4d56-a12d-a6daa3782004] Running
	I0729 19:17:33.830656  156414 system_pods.go:89] "etcd-embed-certs-368536" [d1a1b671-c852-4a69-80fe-9ad13fffc01b] Running
	I0729 19:17:33.830662  156414 system_pods.go:89] "kube-apiserver-embed-certs-368536" [3b9307df-4472-499d-8a82-78f0f342e745] Running
	I0729 19:17:33.830670  156414 system_pods.go:89] "kube-controller-manager-embed-certs-368536" [89bf9b90-6735-417d-9f30-13eacb946ef4] Running
	I0729 19:17:33.830675  156414 system_pods.go:89] "kube-proxy-rxqlm" [1b66638b-5fb8-4bae-a129-8a1fb54389f4] Running
	I0729 19:17:33.830682  156414 system_pods.go:89] "kube-scheduler-embed-certs-368536" [a44062e3-c937-4765-9dfc-858cacbd3a90] Running
	I0729 19:17:33.830692  156414 system_pods.go:89] "metrics-server-569cc877fc-9z4tp" [1382116a-be81-46cb-92b8-ae335164f846] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:17:33.830703  156414 system_pods.go:89] "storage-provisioner" [83d64a91-164a-45b2-9a82-d826e64e6cbd] Running
	I0729 19:17:33.830714  156414 system_pods.go:126] duration metric: took 3.215460876s to wait for k8s-apps to be running ...
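	The retry loop above polls the kube-system pod list until kube-dns and kube-proxy report Running. A comparable wait expressed with kubectl (a sketch; the label selectors are the standard kubeadm ones, not taken from this log):

		kubectl --context embed-certs-368536 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=120s
		kubectl --context embed-certs-368536 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-proxy --timeout=120s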
	I0729 19:17:33.830726  156414 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 19:17:33.830824  156414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 19:17:33.847810  156414 system_svc.go:56] duration metric: took 17.074145ms WaitForService to wait for kubelet
	I0729 19:17:33.847837  156414 kubeadm.go:582] duration metric: took 3.853011216s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 19:17:33.847861  156414 node_conditions.go:102] verifying NodePressure condition ...
	I0729 19:17:33.850180  156414 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 19:17:33.850198  156414 node_conditions.go:123] node cpu capacity is 2
	I0729 19:17:33.850209  156414 node_conditions.go:105] duration metric: took 2.342951ms to run NodePressure ...
	I0729 19:17:33.850221  156414 start.go:241] waiting for startup goroutines ...
	I0729 19:17:33.850230  156414 start.go:246] waiting for cluster config update ...
	I0729 19:17:33.850242  156414 start.go:255] writing updated cluster config ...
	I0729 19:17:33.850512  156414 ssh_runner.go:195] Run: rm -f paused
	I0729 19:17:33.898396  156414 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 19:17:33.899771  156414 out.go:177] * Done! kubectl is now configured to use "embed-certs-368536" cluster and "default" namespace by default
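	Once start reports Done!, the kubectl context named after the profile is the active one. A few post-start sanity checks (a sketch using only stock kubectl subcommands):

		kubectl --context embed-certs-368536 version     # client 1.30.3 / server 1.30.3, matching the skew check above
		kubectl --context embed-certs-368536 get nodes
		kubectl --context embed-certs-368536 get pods -A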
	
	
	==> CRI-O <==
	Jul 29 19:20:40 default-k8s-diff-port-612270 crio[729]: time="2024-07-29 19:20:40.048285397Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=56c797e7-8083-423c-a62e-9a080a399108 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:20:40 default-k8s-diff-port-612270 crio[729]: time="2024-07-29 19:20:40.048481390Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c1c0d04e529681ad3b2d96d9fb1f982535ee7bc1ef1a06f73a71808947a2f58a,PodSandboxId:b27aa50a26aea1055b171f58d7f7034d660c10d5fea3aa02de11cee94a89fc53,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722279869983313509,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60704471-4b2f-4434-97fd-7b84419c8a24,},Annotations:map[string]string{io.kubernetes.container.hash: c60dbb11,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9684d8e303d3ed944c2ab32a160cd75c2716a57aebd11538f28436bce0248a81,PodSandboxId:3cda19ea2b809b678c712d512c491429b6cebd1535c32cd820ab30d06aba1791,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722279869534091468,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t4jjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d048fd15-d145-4fc9-8089-55972dfd052e,},Annotations:map[string]string{io.kubernetes.container.hash: 2db1f715,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:290e296b3fe0b8b28984f3dbede0d158c2d4ce5792e4722a4bfdb063b52b0278,PodSandboxId:b63077d49289f41b892ac67087f984c5b10386bbcb5180462101bf74696a6eb3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722279869284172958,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vd7lb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: e277e67a-e2b3-4c4c-a945-b46e419365c5,},Annotations:map[string]string{io.kubernetes.container.hash: 58f6aec4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a59b0de6efaa78f0541e0afbe618e250474f605d2f01a72c0fa968931ea9555,PodSandboxId:6752afc5607184af3635daa9d9defb5e3ed0f34f2db225b3a0ebcc74e7928883,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING
,CreatedAt:1722279869008834295,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2pgk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb9da3b5-0010-48fe-b349-6880fdd5404f,},Annotations:map[string]string{io.kubernetes.container.hash: 82601ef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3903a83fea545f12af1e31df128b43986f1e4f43bcc4f64cc7850bf2c6fdd95,PodSandboxId:e914858338e32df94dc1457bab74e692c8c4eb0b68643d013725003e08a794fd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722279849
469637536,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-612270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f951d43ab98b43cec81bd4e25c35a8b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9dee6822734aba300b98d475602db7ed7f5ed88ad559dc847eb74fae4ff90c54,PodSandboxId:82515bbf9d419412d248b3215e2999e23f27450e34f192fc73adf227fc4e3e05,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,Crea
tedAt:1722279849374721700,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-612270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64814ead93f163f1c678d369cede4d19,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cd7f00e83588b3df64a5bf3c2a1fbd318a08b41926d3b4c4dad87656ca1bfc9,PodSandboxId:b470e0883ff059cfa22c9497b26ea020590a11a8d71eae1c8d41ff1ffaf5757b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,Create
dAt:1722279849387772112,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-612270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46564e7237ffe15a29374acabc3bb6cf,},Annotations:map[string]string{io.kubernetes.container.hash: af56a501,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c761942700d84b93bcf0430e804257ea9c7618b8b0a62e64f28919db7f2c63fc,PodSandboxId:73dd6a50b62ae6f898d509ccf66bf4339b1030fcf981da0f8b98d91358c188a2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:17222798
49298410768,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-612270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36386695c0c8b1d1a591dbcf3cd8518f,},Annotations:map[string]string{io.kubernetes.container.hash: 700a48ed,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c18090bf0aba3683f313f3a6464d1c6a5903552e91110ee4ada32637a0a180b5,PodSandboxId:269536abda6f94b4823bd59c9e9043f0ea3ed4750de50933a0222a570427059f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722279559234895967,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-612270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46564e7237ffe15a29374acabc3bb6cf,},Annotations:map[string]string{io.kubernetes.container.hash: af56a501,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=56c797e7-8083-423c-a62e-9a080a399108 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:20:40 default-k8s-diff-port-612270 crio[729]: time="2024-07-29 19:20:40.080370009Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=e45d90f5-a8a9-465e-b96e-db72353d09cf name=/runtime.v1.RuntimeService/Status
	Jul 29 19:20:40 default-k8s-diff-port-612270 crio[729]: time="2024-07-29 19:20:40.080462669Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=e45d90f5-a8a9-465e-b96e-db72353d09cf name=/runtime.v1.RuntimeService/Status
	Jul 29 19:20:40 default-k8s-diff-port-612270 crio[729]: time="2024-07-29 19:20:40.085704038Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=35d5a040-7ef3-403b-bd31-8c50e7f84191 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:20:40 default-k8s-diff-port-612270 crio[729]: time="2024-07-29 19:20:40.085768373Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=35d5a040-7ef3-403b-bd31-8c50e7f84191 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:20:40 default-k8s-diff-port-612270 crio[729]: time="2024-07-29 19:20:40.087362096Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0185e040-4743-4df9-978f-aace19a8e41e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:20:40 default-k8s-diff-port-612270 crio[729]: time="2024-07-29 19:20:40.088427463Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722280840088392318,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0185e040-4743-4df9-978f-aace19a8e41e name=/runtime.v1.ImageService/ImageFsInfo
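	The RPCs logged above (ListContainers, Status, Version, ImageFsInfo) are the same calls a CRI client issues over the CRI-O socket. On the default-k8s-diff-port-612270 node they could be reproduced with crictl (a sketch, assuming the stock CRI-O socket path):

		# run inside `minikube -p default-k8s-diff-port-612270 ssh`
		sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
		sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock pods
		sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a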
	Jul 29 19:20:40 default-k8s-diff-port-612270 crio[729]: time="2024-07-29 19:20:40.090190358Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=79eb8b22-d666-4d3a-8618-62a7542a007c name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:20:40 default-k8s-diff-port-612270 crio[729]: time="2024-07-29 19:20:40.090247067Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=79eb8b22-d666-4d3a-8618-62a7542a007c name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:20:40 default-k8s-diff-port-612270 crio[729]: time="2024-07-29 19:20:40.090816370Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c1c0d04e529681ad3b2d96d9fb1f982535ee7bc1ef1a06f73a71808947a2f58a,PodSandboxId:b27aa50a26aea1055b171f58d7f7034d660c10d5fea3aa02de11cee94a89fc53,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722279869983313509,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60704471-4b2f-4434-97fd-7b84419c8a24,},Annotations:map[string]string{io.kubernetes.container.hash: c60dbb11,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9684d8e303d3ed944c2ab32a160cd75c2716a57aebd11538f28436bce0248a81,PodSandboxId:3cda19ea2b809b678c712d512c491429b6cebd1535c32cd820ab30d06aba1791,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722279869534091468,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t4jjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d048fd15-d145-4fc9-8089-55972dfd052e,},Annotations:map[string]string{io.kubernetes.container.hash: 2db1f715,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:290e296b3fe0b8b28984f3dbede0d158c2d4ce5792e4722a4bfdb063b52b0278,PodSandboxId:b63077d49289f41b892ac67087f984c5b10386bbcb5180462101bf74696a6eb3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722279869284172958,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vd7lb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: e277e67a-e2b3-4c4c-a945-b46e419365c5,},Annotations:map[string]string{io.kubernetes.container.hash: 58f6aec4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a59b0de6efaa78f0541e0afbe618e250474f605d2f01a72c0fa968931ea9555,PodSandboxId:6752afc5607184af3635daa9d9defb5e3ed0f34f2db225b3a0ebcc74e7928883,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING
,CreatedAt:1722279869008834295,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2pgk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb9da3b5-0010-48fe-b349-6880fdd5404f,},Annotations:map[string]string{io.kubernetes.container.hash: 82601ef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3903a83fea545f12af1e31df128b43986f1e4f43bcc4f64cc7850bf2c6fdd95,PodSandboxId:e914858338e32df94dc1457bab74e692c8c4eb0b68643d013725003e08a794fd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722279849
469637536,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-612270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f951d43ab98b43cec81bd4e25c35a8b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9dee6822734aba300b98d475602db7ed7f5ed88ad559dc847eb74fae4ff90c54,PodSandboxId:82515bbf9d419412d248b3215e2999e23f27450e34f192fc73adf227fc4e3e05,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,Crea
tedAt:1722279849374721700,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-612270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64814ead93f163f1c678d369cede4d19,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cd7f00e83588b3df64a5bf3c2a1fbd318a08b41926d3b4c4dad87656ca1bfc9,PodSandboxId:b470e0883ff059cfa22c9497b26ea020590a11a8d71eae1c8d41ff1ffaf5757b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,Create
dAt:1722279849387772112,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-612270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46564e7237ffe15a29374acabc3bb6cf,},Annotations:map[string]string{io.kubernetes.container.hash: af56a501,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c761942700d84b93bcf0430e804257ea9c7618b8b0a62e64f28919db7f2c63fc,PodSandboxId:73dd6a50b62ae6f898d509ccf66bf4339b1030fcf981da0f8b98d91358c188a2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:17222798
49298410768,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-612270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36386695c0c8b1d1a591dbcf3cd8518f,},Annotations:map[string]string{io.kubernetes.container.hash: 700a48ed,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c18090bf0aba3683f313f3a6464d1c6a5903552e91110ee4ada32637a0a180b5,PodSandboxId:269536abda6f94b4823bd59c9e9043f0ea3ed4750de50933a0222a570427059f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722279559234895967,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-612270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46564e7237ffe15a29374acabc3bb6cf,},Annotations:map[string]string{io.kubernetes.container.hash: af56a501,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=79eb8b22-d666-4d3a-8618-62a7542a007c name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:20:40 default-k8s-diff-port-612270 crio[729]: time="2024-07-29 19:20:40.127545873Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=df5acf9d-8825-44d0-bc5f-ce0f6a1dc049 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:20:40 default-k8s-diff-port-612270 crio[729]: time="2024-07-29 19:20:40.127637172Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=df5acf9d-8825-44d0-bc5f-ce0f6a1dc049 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:20:40 default-k8s-diff-port-612270 crio[729]: time="2024-07-29 19:20:40.129030054Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8109e818-e2d6-4fe7-b721-1bd261b6537a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:20:40 default-k8s-diff-port-612270 crio[729]: time="2024-07-29 19:20:40.129556587Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722280840129450383,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8109e818-e2d6-4fe7-b721-1bd261b6537a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:20:40 default-k8s-diff-port-612270 crio[729]: time="2024-07-29 19:20:40.130037566Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6e174e16-062d-4ea8-aec2-b5df73876d53 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:20:40 default-k8s-diff-port-612270 crio[729]: time="2024-07-29 19:20:40.130090254Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6e174e16-062d-4ea8-aec2-b5df73876d53 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:20:40 default-k8s-diff-port-612270 crio[729]: time="2024-07-29 19:20:40.130266634Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c1c0d04e529681ad3b2d96d9fb1f982535ee7bc1ef1a06f73a71808947a2f58a,PodSandboxId:b27aa50a26aea1055b171f58d7f7034d660c10d5fea3aa02de11cee94a89fc53,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722279869983313509,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60704471-4b2f-4434-97fd-7b84419c8a24,},Annotations:map[string]string{io.kubernetes.container.hash: c60dbb11,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9684d8e303d3ed944c2ab32a160cd75c2716a57aebd11538f28436bce0248a81,PodSandboxId:3cda19ea2b809b678c712d512c491429b6cebd1535c32cd820ab30d06aba1791,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722279869534091468,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t4jjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d048fd15-d145-4fc9-8089-55972dfd052e,},Annotations:map[string]string{io.kubernetes.container.hash: 2db1f715,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:290e296b3fe0b8b28984f3dbede0d158c2d4ce5792e4722a4bfdb063b52b0278,PodSandboxId:b63077d49289f41b892ac67087f984c5b10386bbcb5180462101bf74696a6eb3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722279869284172958,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vd7lb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: e277e67a-e2b3-4c4c-a945-b46e419365c5,},Annotations:map[string]string{io.kubernetes.container.hash: 58f6aec4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a59b0de6efaa78f0541e0afbe618e250474f605d2f01a72c0fa968931ea9555,PodSandboxId:6752afc5607184af3635daa9d9defb5e3ed0f34f2db225b3a0ebcc74e7928883,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING
,CreatedAt:1722279869008834295,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2pgk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb9da3b5-0010-48fe-b349-6880fdd5404f,},Annotations:map[string]string{io.kubernetes.container.hash: 82601ef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3903a83fea545f12af1e31df128b43986f1e4f43bcc4f64cc7850bf2c6fdd95,PodSandboxId:e914858338e32df94dc1457bab74e692c8c4eb0b68643d013725003e08a794fd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722279849
469637536,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-612270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f951d43ab98b43cec81bd4e25c35a8b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9dee6822734aba300b98d475602db7ed7f5ed88ad559dc847eb74fae4ff90c54,PodSandboxId:82515bbf9d419412d248b3215e2999e23f27450e34f192fc73adf227fc4e3e05,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,Crea
tedAt:1722279849374721700,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-612270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64814ead93f163f1c678d369cede4d19,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cd7f00e83588b3df64a5bf3c2a1fbd318a08b41926d3b4c4dad87656ca1bfc9,PodSandboxId:b470e0883ff059cfa22c9497b26ea020590a11a8d71eae1c8d41ff1ffaf5757b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,Create
dAt:1722279849387772112,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-612270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46564e7237ffe15a29374acabc3bb6cf,},Annotations:map[string]string{io.kubernetes.container.hash: af56a501,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c761942700d84b93bcf0430e804257ea9c7618b8b0a62e64f28919db7f2c63fc,PodSandboxId:73dd6a50b62ae6f898d509ccf66bf4339b1030fcf981da0f8b98d91358c188a2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:17222798
49298410768,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-612270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36386695c0c8b1d1a591dbcf3cd8518f,},Annotations:map[string]string{io.kubernetes.container.hash: 700a48ed,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c18090bf0aba3683f313f3a6464d1c6a5903552e91110ee4ada32637a0a180b5,PodSandboxId:269536abda6f94b4823bd59c9e9043f0ea3ed4750de50933a0222a570427059f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722279559234895967,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-612270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46564e7237ffe15a29374acabc3bb6cf,},Annotations:map[string]string{io.kubernetes.container.hash: af56a501,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6e174e16-062d-4ea8-aec2-b5df73876d53 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:20:40 default-k8s-diff-port-612270 crio[729]: time="2024-07-29 19:20:40.163793716Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2a5355b5-3b06-4fad-b1da-2d67571c9d15 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:20:40 default-k8s-diff-port-612270 crio[729]: time="2024-07-29 19:20:40.163859803Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2a5355b5-3b06-4fad-b1da-2d67571c9d15 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:20:40 default-k8s-diff-port-612270 crio[729]: time="2024-07-29 19:20:40.166087037Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=114f1292-415b-4427-ab9a-027316c1d2e0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:20:40 default-k8s-diff-port-612270 crio[729]: time="2024-07-29 19:20:40.166800332Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722280840166778676,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=114f1292-415b-4427-ab9a-027316c1d2e0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:20:40 default-k8s-diff-port-612270 crio[729]: time="2024-07-29 19:20:40.167649680Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6d79d19c-ffed-4c50-889a-f34c3d7d0089 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:20:40 default-k8s-diff-port-612270 crio[729]: time="2024-07-29 19:20:40.167698488Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6d79d19c-ffed-4c50-889a-f34c3d7d0089 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:20:40 default-k8s-diff-port-612270 crio[729]: time="2024-07-29 19:20:40.167873409Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c1c0d04e529681ad3b2d96d9fb1f982535ee7bc1ef1a06f73a71808947a2f58a,PodSandboxId:b27aa50a26aea1055b171f58d7f7034d660c10d5fea3aa02de11cee94a89fc53,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722279869983313509,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60704471-4b2f-4434-97fd-7b84419c8a24,},Annotations:map[string]string{io.kubernetes.container.hash: c60dbb11,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9684d8e303d3ed944c2ab32a160cd75c2716a57aebd11538f28436bce0248a81,PodSandboxId:3cda19ea2b809b678c712d512c491429b6cebd1535c32cd820ab30d06aba1791,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722279869534091468,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t4jjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d048fd15-d145-4fc9-8089-55972dfd052e,},Annotations:map[string]string{io.kubernetes.container.hash: 2db1f715,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:290e296b3fe0b8b28984f3dbede0d158c2d4ce5792e4722a4bfdb063b52b0278,PodSandboxId:b63077d49289f41b892ac67087f984c5b10386bbcb5180462101bf74696a6eb3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722279869284172958,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vd7lb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: e277e67a-e2b3-4c4c-a945-b46e419365c5,},Annotations:map[string]string{io.kubernetes.container.hash: 58f6aec4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a59b0de6efaa78f0541e0afbe618e250474f605d2f01a72c0fa968931ea9555,PodSandboxId:6752afc5607184af3635daa9d9defb5e3ed0f34f2db225b3a0ebcc74e7928883,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING
,CreatedAt:1722279869008834295,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2pgk2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb9da3b5-0010-48fe-b349-6880fdd5404f,},Annotations:map[string]string{io.kubernetes.container.hash: 82601ef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3903a83fea545f12af1e31df128b43986f1e4f43bcc4f64cc7850bf2c6fdd95,PodSandboxId:e914858338e32df94dc1457bab74e692c8c4eb0b68643d013725003e08a794fd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722279849
469637536,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-612270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f951d43ab98b43cec81bd4e25c35a8b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9dee6822734aba300b98d475602db7ed7f5ed88ad559dc847eb74fae4ff90c54,PodSandboxId:82515bbf9d419412d248b3215e2999e23f27450e34f192fc73adf227fc4e3e05,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,Crea
tedAt:1722279849374721700,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-612270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64814ead93f163f1c678d369cede4d19,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cd7f00e83588b3df64a5bf3c2a1fbd318a08b41926d3b4c4dad87656ca1bfc9,PodSandboxId:b470e0883ff059cfa22c9497b26ea020590a11a8d71eae1c8d41ff1ffaf5757b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,Create
dAt:1722279849387772112,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-612270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46564e7237ffe15a29374acabc3bb6cf,},Annotations:map[string]string{io.kubernetes.container.hash: af56a501,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c761942700d84b93bcf0430e804257ea9c7618b8b0a62e64f28919db7f2c63fc,PodSandboxId:73dd6a50b62ae6f898d509ccf66bf4339b1030fcf981da0f8b98d91358c188a2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:17222798
49298410768,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-612270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36386695c0c8b1d1a591dbcf3cd8518f,},Annotations:map[string]string{io.kubernetes.container.hash: 700a48ed,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c18090bf0aba3683f313f3a6464d1c6a5903552e91110ee4ada32637a0a180b5,PodSandboxId:269536abda6f94b4823bd59c9e9043f0ea3ed4750de50933a0222a570427059f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722279559234895967,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-612270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46564e7237ffe15a29374acabc3bb6cf,},Annotations:map[string]string{io.kubernetes.container.hash: af56a501,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6d79d19c-ffed-4c50-889a-f34c3d7d0089 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c1c0d04e52968       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Running             storage-provisioner       0                   b27aa50a26aea       storage-provisioner
	9684d8e303d3e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   16 minutes ago      Running             coredns                   0                   3cda19ea2b809       coredns-7db6d8ff4d-t4jjm
	290e296b3fe0b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   16 minutes ago      Running             coredns                   0                   b63077d49289f       coredns-7db6d8ff4d-vd7lb
	6a59b0de6efaa       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   16 minutes ago      Running             kube-proxy                0                   6752afc560718       kube-proxy-2pgk2
	a3903a83fea54       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   16 minutes ago      Running             kube-controller-manager   2                   e914858338e32       kube-controller-manager-default-k8s-diff-port-612270
	7cd7f00e83588       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   16 minutes ago      Running             kube-apiserver            2                   b470e0883ff05       kube-apiserver-default-k8s-diff-port-612270
	9dee6822734ab       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   16 minutes ago      Running             kube-scheduler            2                   82515bbf9d419       kube-scheduler-default-k8s-diff-port-612270
	c761942700d84       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   16 minutes ago      Running             etcd                      2                   73dd6a50b62ae       etcd-default-k8s-diff-port-612270
	c18090bf0aba3       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   21 minutes ago      Exited              kube-apiserver            1                   269536abda6f9       kube-apiserver-default-k8s-diff-port-612270
	
	
	==> coredns [290e296b3fe0b8b28984f3dbede0d158c2d4ce5792e4722a4bfdb063b52b0278] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [9684d8e303d3ed944c2ab32a160cd75c2716a57aebd11538f28436bce0248a81] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-612270
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-612270
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=de53afae5e8f4269438099f4dad14d93a8a17e35
	                    minikube.k8s.io/name=default-k8s-diff-port-612270
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T19_04_15_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 19:04:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-612270
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 19:20:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 19:19:53 +0000   Mon, 29 Jul 2024 19:04:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 19:19:53 +0000   Mon, 29 Jul 2024 19:04:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 19:19:53 +0000   Mon, 29 Jul 2024 19:04:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 19:19:53 +0000   Mon, 29 Jul 2024 19:04:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.152
	  Hostname:    default-k8s-diff-port-612270
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4f483107da7d464ca4baff73fe22ae90
	  System UUID:                4f483107-da7d-464c-a4ba-ff73fe22ae90
	  Boot ID:                    1625d9c3-7936-4519-a4ab-ca4b848415f1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-t4jjm                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7db6d8ff4d-vd7lb                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-default-k8s-diff-port-612270                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kube-apiserver-default-k8s-diff-port-612270             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-612270    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-2pgk2                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-default-k8s-diff-port-612270             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-569cc877fc-dfkzq                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         16m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 16m   kube-proxy       
	  Normal  Starting                 16m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m   kubelet          Node default-k8s-diff-port-612270 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m   kubelet          Node default-k8s-diff-port-612270 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m   kubelet          Node default-k8s-diff-port-612270 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m   node-controller  Node default-k8s-diff-port-612270 event: Registered Node default-k8s-diff-port-612270 in Controller
	
	
	==> dmesg <==
	[  +0.039314] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.736303] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Jul29 18:59] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.572887] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.765017] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.059376] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.048963] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +0.211828] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[  +0.119854] systemd-fstab-generator[684]: Ignoring "noauto" option for root device
	[  +0.296714] systemd-fstab-generator[713]: Ignoring "noauto" option for root device
	[  +4.286695] systemd-fstab-generator[811]: Ignoring "noauto" option for root device
	[  +0.060638] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.090069] systemd-fstab-generator[936]: Ignoring "noauto" option for root device
	[  +4.579977] kauditd_printk_skb: 97 callbacks suppressed
	[  +7.342160] kauditd_printk_skb: 50 callbacks suppressed
	[  +8.492011] kauditd_printk_skb: 27 callbacks suppressed
	[Jul29 19:04] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.283564] systemd-fstab-generator[3604]: Ignoring "noauto" option for root device
	[  +4.334143] kauditd_printk_skb: 54 callbacks suppressed
	[  +1.719478] systemd-fstab-generator[3925]: Ignoring "noauto" option for root device
	[ +13.413557] systemd-fstab-generator[4117]: Ignoring "noauto" option for root device
	[  +0.086050] kauditd_printk_skb: 14 callbacks suppressed
	[Jul29 19:05] kauditd_printk_skb: 82 callbacks suppressed
	
	
	==> etcd [c761942700d84b93bcf0430e804257ea9c7618b8b0a62e64f28919db7f2c63fc] <==
	{"level":"info","ts":"2024-07-29T19:04:10.496254Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T19:04:10.497557Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T19:04:10.522573Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T19:04:10.52265Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T19:04:10.522957Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.152:2379"}
	{"level":"info","ts":"2024-07-29T19:04:10.523096Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ce072c4559d5992c","local-member-id":"900c4b71f7b778f3","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T19:04:10.523184Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T19:04:10.523226Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T19:04:10.525148Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-07-29T19:12:19.850572Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"963.975055ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8715469132556829442 > lease_revoke:<id:78f390ffe0ee82b9>","response":"size:29"}
	{"level":"info","ts":"2024-07-29T19:12:19.850783Z","caller":"traceutil/trace.go:171","msg":"trace[12790753] linearizableReadLoop","detail":"{readStateIndex:940; appliedIndex:939; }","duration":"1.119661062s","start":"2024-07-29T19:12:18.73108Z","end":"2024-07-29T19:12:19.850741Z","steps":["trace[12790753] 'read index received'  (duration: 155.343049ms)","trace[12790753] 'applied index is now lower than readState.Index'  (duration: 964.316943ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T19:12:19.85092Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.119803455s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-29T19:12:19.850936Z","caller":"traceutil/trace.go:171","msg":"trace[1495363539] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:831; }","duration":"1.119874546s","start":"2024-07-29T19:12:18.731056Z","end":"2024-07-29T19:12:19.850931Z","steps":["trace[1495363539] 'agreement among raft nodes before linearized reading'  (duration: 1.119804237s)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T19:12:19.850961Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T19:12:18.731043Z","time spent":"1.119907123s","remote":"127.0.0.1:47888","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-07-29T19:12:19.851155Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"421.020826ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-29T19:12:19.85146Z","caller":"traceutil/trace.go:171","msg":"trace[937393213] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:831; }","duration":"421.353864ms","start":"2024-07-29T19:12:19.430093Z","end":"2024-07-29T19:12:19.851447Z","steps":["trace[937393213] 'agreement among raft nodes before linearized reading'  (duration: 421.020953ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T19:12:19.851594Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T19:12:19.430079Z","time spent":"421.505209ms","remote":"127.0.0.1:48126","response type":"/etcdserverpb.KV/Range","request count":0,"request size":76,"response count":0,"response size":29,"request content":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" "}
	{"level":"warn","ts":"2024-07-29T19:12:19.851208Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.14725ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ingress/\" range_end:\"/registry/ingress0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-29T19:12:19.851735Z","caller":"traceutil/trace.go:171","msg":"trace[1678992308] range","detail":"{range_begin:/registry/ingress/; range_end:/registry/ingress0; response_count:0; response_revision:831; }","duration":"109.704771ms","start":"2024-07-29T19:12:19.742023Z","end":"2024-07-29T19:12:19.851728Z","steps":["trace[1678992308] 'agreement among raft nodes before linearized reading'  (duration: 109.160813ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T19:14:10.5787Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":677}
	{"level":"info","ts":"2024-07-29T19:14:10.58716Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":677,"took":"8.10319ms","hash":1415730995,"current-db-size-bytes":2244608,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2244608,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-07-29T19:14:10.587213Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1415730995,"revision":677,"compact-revision":-1}
	{"level":"info","ts":"2024-07-29T19:19:10.588118Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":920}
	{"level":"info","ts":"2024-07-29T19:19:10.592082Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":920,"took":"3.578267ms","hash":1081082201,"current-db-size-bytes":2244608,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":1556480,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-07-29T19:19:10.592131Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1081082201,"revision":920,"compact-revision":677}
	
	
	==> kernel <==
	 19:20:40 up 21 min,  0 users,  load average: 0.06, 0.22, 0.20
	Linux default-k8s-diff-port-612270 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [7cd7f00e83588b3df64a5bf3c2a1fbd318a08b41926d3b4c4dad87656ca1bfc9] <==
	I0729 19:15:12.983575       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 19:17:12.982384       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 19:17:12.982792       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 19:17:12.982827       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 19:17:12.983839       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 19:17:12.983906       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 19:17:12.983914       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 19:19:11.987753       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 19:19:11.988068       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0729 19:19:12.988610       1 handler_proxy.go:93] no RequestInfo found in the context
	W0729 19:19:12.988611       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 19:19:12.988900       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 19:19:12.988930       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0729 19:19:12.988991       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 19:19:12.990256       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 19:20:12.990112       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 19:20:12.990405       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 19:20:12.990437       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 19:20:12.990477       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 19:20:12.990599       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 19:20:12.991801       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [c18090bf0aba3683f313f3a6464d1c6a5903552e91110ee4ada32637a0a180b5] <==
	W0729 19:04:05.496063       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:05.542692       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:05.580344       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:05.687403       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:05.728691       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:05.744044       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:05.768954       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:05.810605       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:05.870371       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:05.875025       1 logging.go:59] [core] [Channel #178 SubChannel #179] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:05.914618       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:05.935630       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:05.956726       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:06.004775       1 logging.go:59] [core] [Channel #181 SubChannel #182] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:06.017902       1 logging.go:59] [core] [Channel #10 SubChannel #11] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:06.070975       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:06.079972       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:06.089969       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:06.093579       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:06.213331       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:06.288407       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:06.753251       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:06.753251       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:06.902542       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:06.939755       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [a3903a83fea545f12af1e31df128b43986f1e4f43bcc4f64cc7850bf2c6fdd95] <==
	I0729 19:14:57.942915       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:15:27.481458       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 19:15:27.951639       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0729 19:15:32.821343       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="216.877µs"
	I0729 19:15:44.817229       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="106.602µs"
	E0729 19:15:57.486541       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 19:15:57.960307       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:16:27.492907       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 19:16:27.968395       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:16:57.500203       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 19:16:57.978313       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:17:27.509365       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 19:17:27.986486       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:17:57.515082       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 19:17:57.994312       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:18:27.521909       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 19:18:28.003587       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:18:57.528864       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 19:18:58.011440       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:19:27.533598       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 19:19:28.019074       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:19:57.538853       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 19:19:58.026871       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:20:27.545070       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 19:20:28.035483       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [6a59b0de6efaa78f0541e0afbe618e250474f605d2f01a72c0fa968931ea9555] <==
	I0729 19:04:29.572466       1 server_linux.go:69] "Using iptables proxy"
	I0729 19:04:29.602948       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.152"]
	I0729 19:04:29.966301       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 19:04:29.973643       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 19:04:29.973713       1 server_linux.go:165] "Using iptables Proxier"
	I0729 19:04:30.013736       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 19:04:30.016875       1 server.go:872] "Version info" version="v1.30.3"
	I0729 19:04:30.019686       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 19:04:30.021125       1 config.go:192] "Starting service config controller"
	I0729 19:04:30.021655       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 19:04:30.021743       1 config.go:319] "Starting node config controller"
	I0729 19:04:30.021765       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 19:04:30.026088       1 config.go:101] "Starting endpoint slice config controller"
	I0729 19:04:30.026115       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 19:04:30.122693       1 shared_informer.go:320] Caches are synced for node config
	I0729 19:04:30.122739       1 shared_informer.go:320] Caches are synced for service config
	I0729 19:04:30.127205       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [9dee6822734aba300b98d475602db7ed7f5ed88ad559dc847eb74fae4ff90c54] <==
	W0729 19:04:11.997363       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 19:04:11.997391       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 19:04:11.997442       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 19:04:11.997467       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 19:04:11.997575       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 19:04:11.997646       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 19:04:11.998396       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 19:04:11.998568       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 19:04:12.827260       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 19:04:12.827350       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0729 19:04:12.942388       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 19:04:12.942434       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 19:04:12.973695       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 19:04:12.973743       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 19:04:13.000824       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 19:04:13.000884       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 19:04:13.048352       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 19:04:13.048403       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 19:04:13.162171       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 19:04:13.162282       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0729 19:04:13.174181       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 19:04:13.174257       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 19:04:13.426283       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 19:04:13.426330       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0729 19:04:15.876145       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 19:18:14 default-k8s-diff-port-612270 kubelet[3932]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 19:18:28 default-k8s-diff-port-612270 kubelet[3932]: E0729 19:18:28.801801    3932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-dfkzq" podUID="69798da9-45ca-40cb-b066-c06b2b11b7ea"
	Jul 29 19:18:39 default-k8s-diff-port-612270 kubelet[3932]: E0729 19:18:39.800911    3932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-dfkzq" podUID="69798da9-45ca-40cb-b066-c06b2b11b7ea"
	Jul 29 19:18:51 default-k8s-diff-port-612270 kubelet[3932]: E0729 19:18:51.800648    3932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-dfkzq" podUID="69798da9-45ca-40cb-b066-c06b2b11b7ea"
	Jul 29 19:19:06 default-k8s-diff-port-612270 kubelet[3932]: E0729 19:19:06.807721    3932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-dfkzq" podUID="69798da9-45ca-40cb-b066-c06b2b11b7ea"
	Jul 29 19:19:14 default-k8s-diff-port-612270 kubelet[3932]: E0729 19:19:14.836680    3932 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 19:19:14 default-k8s-diff-port-612270 kubelet[3932]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 19:19:14 default-k8s-diff-port-612270 kubelet[3932]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 19:19:14 default-k8s-diff-port-612270 kubelet[3932]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 19:19:14 default-k8s-diff-port-612270 kubelet[3932]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 19:19:18 default-k8s-diff-port-612270 kubelet[3932]: E0729 19:19:18.802998    3932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-dfkzq" podUID="69798da9-45ca-40cb-b066-c06b2b11b7ea"
	Jul 29 19:19:29 default-k8s-diff-port-612270 kubelet[3932]: E0729 19:19:29.800570    3932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-dfkzq" podUID="69798da9-45ca-40cb-b066-c06b2b11b7ea"
	Jul 29 19:19:40 default-k8s-diff-port-612270 kubelet[3932]: E0729 19:19:40.800595    3932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-dfkzq" podUID="69798da9-45ca-40cb-b066-c06b2b11b7ea"
	Jul 29 19:19:54 default-k8s-diff-port-612270 kubelet[3932]: E0729 19:19:54.807332    3932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-dfkzq" podUID="69798da9-45ca-40cb-b066-c06b2b11b7ea"
	Jul 29 19:20:08 default-k8s-diff-port-612270 kubelet[3932]: E0729 19:20:08.801836    3932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-dfkzq" podUID="69798da9-45ca-40cb-b066-c06b2b11b7ea"
	Jul 29 19:20:14 default-k8s-diff-port-612270 kubelet[3932]: E0729 19:20:14.834414    3932 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 19:20:14 default-k8s-diff-port-612270 kubelet[3932]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 19:20:14 default-k8s-diff-port-612270 kubelet[3932]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 19:20:14 default-k8s-diff-port-612270 kubelet[3932]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 19:20:14 default-k8s-diff-port-612270 kubelet[3932]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 19:20:19 default-k8s-diff-port-612270 kubelet[3932]: E0729 19:20:19.800670    3932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-dfkzq" podUID="69798da9-45ca-40cb-b066-c06b2b11b7ea"
	Jul 29 19:20:33 default-k8s-diff-port-612270 kubelet[3932]: E0729 19:20:33.812396    3932 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 29 19:20:33 default-k8s-diff-port-612270 kubelet[3932]: E0729 19:20:33.812469    3932 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 29 19:20:33 default-k8s-diff-port-612270 kubelet[3932]: E0729 19:20:33.812727    3932 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7lct7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathE
xpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,Stdi
nOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-569cc877fc-dfkzq_kube-system(69798da9-45ca-40cb-b066-c06b2b11b7ea): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jul 29 19:20:33 default-k8s-diff-port-612270 kubelet[3932]: E0729 19:20:33.812763    3932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-569cc877fc-dfkzq" podUID="69798da9-45ca-40cb-b066-c06b2b11b7ea"
	
	
	==> storage-provisioner [c1c0d04e529681ad3b2d96d9fb1f982535ee7bc1ef1a06f73a71808947a2f58a] <==
	I0729 19:04:30.117437       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 19:04:30.135851       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 19:04:30.135915       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 19:04:30.147577       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 19:04:30.147715       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-612270_7b018e85-001f-4428-9071-9f02eaeb6168!
	I0729 19:04:30.148643       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"39252ba2-999a-4c8d-a26c-b086676f7fa3", APIVersion:"v1", ResourceVersion:"417", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-612270_7b018e85-001f-4428-9071-9f02eaeb6168 became leader
	I0729 19:04:30.248361       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-612270_7b018e85-001f-4428-9071-9f02eaeb6168!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-612270 -n default-k8s-diff-port-612270
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-612270 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-dfkzq
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-612270 describe pod metrics-server-569cc877fc-dfkzq
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-612270 describe pod metrics-server-569cc877fc-dfkzq: exit status 1 (61.217829ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-dfkzq" not found

** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-612270 describe pod metrics-server-569cc877fc-dfkzq: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (428.66s)
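
The kubelet log above shows metrics-server-569cc877fc-dfkzq stuck in ImagePullBackOff: this run points the MetricsServer addon at the unresolvable registry fake.domain (see the --registries=MetricsServer=fake.domain entries in the Audit table below), so the image pull can never succeed and the pod never becomes Running. A minimal sketch of commands one might run by hand against the same profile to confirm this, assuming the context name from the logs above and the standard k8s-app labels used by the dashboard and metrics-server manifests:

    # list the dashboard pods the AddonExistsAfterStop check waits for
    kubectl --context default-k8s-diff-port-612270 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
    # inspect the metrics-server pod that is stuck pulling from fake.domain
    kubectl --context default-k8s-diff-port-612270 -n kube-system describe pod -l k8s-app=metrics-server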

x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (371.07s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-524369 -n no-preload-524369
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-07-29 19:20:19.668881148 +0000 UTC m=+6429.639584926
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-524369 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-524369 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.171µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-524369 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
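
Because the 9m0s context deadline had already expired, the describe call above returned immediately and the "Addon deployment info" is empty. A minimal sketch of how one might re-run the same check by hand, outside the test's deadline, assuming the context name from the logs above:

    # re-run the check that timed out inside the test
    kubectl --context no-preload-524369 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
    kubectl --context no-preload-524369 -n kubernetes-dashboard describe deploy/dashboard-metrics-scraper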
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-524369 -n no-preload-524369
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-524369 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-524369 logs -n 25: (1.23753519s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable dashboard -p no-preload-524369                  | no-preload-524369            | jenkins | v1.33.1 | 29 Jul 24 18:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-524369 --memory=2200                     | no-preload-524369            | jenkins | v1.33.1 | 29 Jul 24 18:54 UTC | 29 Jul 24 19:05 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-612270       | default-k8s-diff-port-612270 | jenkins | v1.33.1 | 29 Jul 24 18:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-612270 | jenkins | v1.33.1 | 29 Jul 24 18:55 UTC | 29 Jul 24 19:04 UTC |
	|         | default-k8s-diff-port-612270                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-834964                              | old-k8s-version-834964       | jenkins | v1.33.1 | 29 Jul 24 18:55 UTC | 29 Jul 24 18:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-834964             | old-k8s-version-834964       | jenkins | v1.33.1 | 29 Jul 24 18:55 UTC | 29 Jul 24 18:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-834964                              | old-k8s-version-834964       | jenkins | v1.33.1 | 29 Jul 24 18:55 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-974855                              | cert-expiration-974855       | jenkins | v1.33.1 | 29 Jul 24 19:01 UTC | 29 Jul 24 19:01 UTC |
	| start   | -p newest-cni-453780 --memory=2200 --alsologtostderr   | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:01 UTC | 29 Jul 24 19:02 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-453780             | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-453780                                   | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-453780                  | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-453780 --memory=2200 --alsologtostderr   | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| image   | newest-cni-453780 image list                           | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-453780                                   | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-453780                                   | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-453780                                   | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	| delete  | -p newest-cni-453780                                   | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	| delete  | -p                                                     | disable-driver-mounts-148539 | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | disable-driver-mounts-148539                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-368536                                  | embed-certs-368536           | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:04 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-368536            | embed-certs-368536           | jenkins | v1.33.1 | 29 Jul 24 19:04 UTC | 29 Jul 24 19:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-368536                                  | embed-certs-368536           | jenkins | v1.33.1 | 29 Jul 24 19:04 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-368536                 | embed-certs-368536           | jenkins | v1.33.1 | 29 Jul 24 19:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-368536                                  | embed-certs-368536           | jenkins | v1.33.1 | 29 Jul 24 19:07 UTC | 29 Jul 24 19:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-834964                              | old-k8s-version-834964       | jenkins | v1.33.1 | 29 Jul 24 19:19 UTC | 29 Jul 24 19:19 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 19:07:08
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 19:07:08.883432  156414 out.go:291] Setting OutFile to fd 1 ...
	I0729 19:07:08.883769  156414 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:07:08.883781  156414 out.go:304] Setting ErrFile to fd 2...
	I0729 19:07:08.883788  156414 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:07:08.883976  156414 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19339-88081/.minikube/bin
	I0729 19:07:08.884534  156414 out.go:298] Setting JSON to false
	I0729 19:07:08.885578  156414 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":13749,"bootTime":1722266280,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 19:07:08.885642  156414 start.go:139] virtualization: kvm guest
	I0729 19:07:08.888023  156414 out.go:177] * [embed-certs-368536] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 19:07:08.889601  156414 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 19:07:08.889601  156414 notify.go:220] Checking for updates...
	I0729 19:07:08.892436  156414 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 19:07:08.893966  156414 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19339-88081/kubeconfig
	I0729 19:07:08.895257  156414 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19339-88081/.minikube
	I0729 19:07:08.896516  156414 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 19:07:08.897746  156414 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 19:07:08.899225  156414 config.go:182] Loaded profile config "embed-certs-368536": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:07:08.899588  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:07:08.899642  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:07:08.914943  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40387
	I0729 19:07:08.915307  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:07:08.915885  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:07:08.915905  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:07:08.916305  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:07:08.916486  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:07:08.916703  156414 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 19:07:08.917034  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:07:08.917074  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:07:08.931497  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36817
	I0729 19:07:08.931857  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:07:08.932280  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:07:08.932300  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:07:08.932640  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:07:08.932819  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:07:08.968386  156414 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 19:07:08.969538  156414 start.go:297] selected driver: kvm2
	I0729 19:07:08.969556  156414 start.go:901] validating driver "kvm2" against &{Name:embed-certs-368536 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:embed-certs-368536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.95 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:262
80h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:07:08.969681  156414 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 19:07:08.970358  156414 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 19:07:08.970428  156414 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19339-88081/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 19:07:08.986808  156414 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 19:07:08.987271  156414 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 19:07:08.987351  156414 cni.go:84] Creating CNI manager for ""
	I0729 19:07:08.987370  156414 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:07:08.987443  156414 start.go:340] cluster config:
	{Name:embed-certs-368536 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-368536 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.95 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p
2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:07:08.987605  156414 iso.go:125] acquiring lock: {Name:mkff602c753bfa3e0d79d57ee8dc490f2e8d0298 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 19:07:08.989202  156414 out.go:177] * Starting "embed-certs-368536" primary control-plane node in "embed-certs-368536" cluster
	I0729 19:07:08.990452  156414 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 19:07:08.990496  156414 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 19:07:08.990510  156414 cache.go:56] Caching tarball of preloaded images
	I0729 19:07:08.990604  156414 preload.go:172] Found /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 19:07:08.990618  156414 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 19:07:08.990746  156414 profile.go:143] Saving config to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/config.json ...
	I0729 19:07:08.990949  156414 start.go:360] acquireMachinesLock for embed-certs-368536: {Name:mkb4fc91615189a18c2505c715f6575ee0da2912 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 19:07:08.990994  156414 start.go:364] duration metric: took 26.792µs to acquireMachinesLock for "embed-certs-368536"
	I0729 19:07:08.991013  156414 start.go:96] Skipping create...Using existing machine configuration
	I0729 19:07:08.991023  156414 fix.go:54] fixHost starting: 
	I0729 19:07:08.991314  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:07:08.991356  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:07:09.006100  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45265
	I0729 19:07:09.006507  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:07:09.007034  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:07:09.007060  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:07:09.007401  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:07:09.007594  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:07:09.007758  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetState
	I0729 19:07:09.009424  156414 fix.go:112] recreateIfNeeded on embed-certs-368536: state=Running err=<nil>
	W0729 19:07:09.009448  156414 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 19:07:09.011240  156414 out.go:177] * Updating the running kvm2 "embed-certs-368536" VM ...
	I0729 19:07:09.012305  156414 machine.go:94] provisionDockerMachine start ...
	I0729 19:07:09.012324  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:07:09.012506  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:07:09.014924  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:07:09.015334  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:07:09.015362  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:07:09.015507  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:07:09.015664  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:07:09.015796  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:07:09.015962  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:07:09.016106  156414 main.go:141] libmachine: Using SSH client type: native
	I0729 19:07:09.016288  156414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.95 22 <nil> <nil>}
	I0729 19:07:09.016300  156414 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 19:07:11.913157  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:14.985074  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:21.065092  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:24.137179  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:30.217186  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:33.289154  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:41.417004  152077 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:07:41.417303  152077 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:07:41.417326  152077 kubeadm.go:310] 
	I0729 19:07:41.417370  152077 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 19:07:41.417434  152077 kubeadm.go:310] 		timed out waiting for the condition
	I0729 19:07:41.417456  152077 kubeadm.go:310] 
	I0729 19:07:41.417514  152077 kubeadm.go:310] 	This error is likely caused by:
	I0729 19:07:41.417586  152077 kubeadm.go:310] 		- The kubelet is not running
	I0729 19:07:41.417720  152077 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 19:07:41.417732  152077 kubeadm.go:310] 
	I0729 19:07:41.417870  152077 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 19:07:41.417917  152077 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 19:07:41.417972  152077 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 19:07:41.417982  152077 kubeadm.go:310] 
	I0729 19:07:41.418104  152077 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 19:07:41.418232  152077 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 19:07:41.418247  152077 kubeadm.go:310] 
	I0729 19:07:41.418370  152077 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 19:07:41.418477  152077 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 19:07:41.418596  152077 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 19:07:41.418696  152077 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 19:07:41.418706  152077 kubeadm.go:310] 
	I0729 19:07:41.419442  152077 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 19:07:41.419562  152077 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 19:07:41.419660  152077 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 19:07:41.419767  152077 kubeadm.go:394] duration metric: took 8m3.273724985s to StartCluster
	I0729 19:07:41.419853  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:07:41.419923  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:07:41.466895  152077 cri.go:89] found id: ""
	I0729 19:07:41.466922  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.466933  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:07:41.466941  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:07:41.467013  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:07:41.500836  152077 cri.go:89] found id: ""
	I0729 19:07:41.500876  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.500888  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:07:41.500896  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:07:41.500949  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:07:41.534917  152077 cri.go:89] found id: ""
	I0729 19:07:41.534946  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.534958  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:07:41.534968  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:07:41.535038  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:07:41.570516  152077 cri.go:89] found id: ""
	I0729 19:07:41.570545  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.570556  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:07:41.570565  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:07:41.570640  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:07:41.608833  152077 cri.go:89] found id: ""
	I0729 19:07:41.608881  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.608894  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:07:41.608902  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:07:41.608969  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:07:41.644079  152077 cri.go:89] found id: ""
	I0729 19:07:41.644114  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.644127  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:07:41.644136  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:07:41.644198  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:07:41.678991  152077 cri.go:89] found id: ""
	I0729 19:07:41.679019  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.679028  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:07:41.679035  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:07:41.679088  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:07:41.722775  152077 cri.go:89] found id: ""
	I0729 19:07:41.722803  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.722815  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:07:41.722829  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:07:41.722857  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:07:41.841614  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:07:41.841641  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:07:41.841658  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:07:41.945679  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:07:41.945716  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:07:41.984505  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:07:41.984536  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:07:42.037376  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:07:42.037418  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
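The log-gathering pass above runs a fixed set of commands on the guest: the kubelet and CRI-O journals, container status via crictl, dmesg, and a kubectl describe that fails because the API server is unreachable. A minimal sketch of collecting the same diagnostics by hand over minikube ssh follows; the profile name is a placeholder, since this stretch of the log does not print it.

    # Hypothetical profile name; substitute the profile this run belongs to.
    PROFILE="<profile>"
    # Same diagnostics minikube gathers above, run manually against the guest.
    minikube -p "$PROFILE" ssh -- sudo journalctl -u kubelet -n 400
    minikube -p "$PROFILE" ssh -- sudo journalctl -u crio -n 400
    minikube -p "$PROFILE" ssh -- "sudo crictl ps -a || sudo docker ps -a"
    minikube -p "$PROFILE" ssh -- "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"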
	W0729 19:07:42.051259  152077 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0729 19:07:42.051310  152077 out.go:239] * 
	W0729 19:07:42.051369  152077 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 19:07:42.051390  152077 out.go:239] * 
	W0729 19:07:42.052280  152077 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 19:07:42.055255  152077 out.go:177] 
	W0729 19:07:42.056299  152077 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 19:07:42.056362  152077 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0729 19:07:42.056391  152077 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0729 19:07:42.057745  152077 out.go:177] 
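The kubeadm failure above comes with its own follow-ups: the kubelet service checks, the crictl listing over the CRI-O socket, the preflight warning that the kubelet unit is not enabled, and the closing suggestion to retry with a kubelet cgroup-driver override. A sketch that strings those hints together, with the profile name left as a placeholder:

    # Inside the guest (minikube ssh): the checks suggested in the output above.
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    # The preflight warning above says the kubelet unit is not enabled:
    sudo systemctl enable kubelet.service

    # Back on the host: retry with the override suggested in the log.
    minikube start -p "<profile>" --extra-config=kubelet.cgroup-driver=systemd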
	I0729 19:07:42.413268  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:45.481071  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:51.561079  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:54.633091  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:00.713100  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:03.785182  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:09.865116  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:12.937102  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:19.017094  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:22.093191  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:28.169116  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:31.241130  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:37.321134  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:40.393092  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:46.473118  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:49.545166  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:55.625086  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:58.697184  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:04.777113  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:07.849165  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:13.933065  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:17.001100  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:23.081133  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:26.153086  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:32.233178  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:35.305183  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:41.385184  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:44.461106  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:50.537120  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:53.609150  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:59.689091  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:02.761193  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:08.845074  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:11.917067  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:17.993090  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:21.065137  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:27.145098  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:30.217175  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:36.301060  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:39.369118  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:45.449082  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:48.521097  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:54.601120  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:57.673200  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:03.753116  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:06.825136  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:12.905195  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:15.977118  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:22.057076  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:25.129144  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:31.209150  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:34.281164  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:40.365067  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:43.437113  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:46.437452  156414 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 19:11:46.437532  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetMachineName
	I0729 19:11:46.437865  156414 buildroot.go:166] provisioning hostname "embed-certs-368536"
	I0729 19:11:46.437902  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetMachineName
	I0729 19:11:46.438117  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:11:46.439690  156414 machine.go:97] duration metric: took 4m37.427371174s to provisionDockerMachine
	I0729 19:11:46.439733  156414 fix.go:56] duration metric: took 4m37.448711854s for fixHost
	I0729 19:11:46.439746  156414 start.go:83] releasing machines lock for "embed-certs-368536", held for 4m37.448735558s
	W0729 19:11:46.439780  156414 start.go:714] error starting host: provision: host is not running
	W0729 19:11:46.439926  156414 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0729 19:11:46.439941  156414 start.go:729] Will try again in 5 seconds ...
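The long run of "no route to host" dials above means the embed-certs-368536 guest was unreachable on 192.168.50.95:22 for the whole 4m37s provisioning attempt. A rough manual reachability check, reusing the key path and SSH options the driver itself prints further down in this log:

    # Can the host reach the guest's SSH port at all?
    ping -c 3 192.168.50.95
    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -i /home/jenkins/minikube-integration/19339-88081/.minikube/machines/embed-certs-368536/id_rsa \
        docker@192.168.50.95 exit 0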
	I0729 19:11:51.440155  156414 start.go:360] acquireMachinesLock for embed-certs-368536: {Name:mkb4fc91615189a18c2505c715f6575ee0da2912 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 19:11:51.440266  156414 start.go:364] duration metric: took 69.381µs to acquireMachinesLock for "embed-certs-368536"
	I0729 19:11:51.440311  156414 start.go:96] Skipping create...Using existing machine configuration
	I0729 19:11:51.440322  156414 fix.go:54] fixHost starting: 
	I0729 19:11:51.440735  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:11:51.440764  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:11:51.455959  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35793
	I0729 19:11:51.456443  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:11:51.457053  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:11:51.457078  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:11:51.457475  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:11:51.457721  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:11:51.457885  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetState
	I0729 19:11:51.459531  156414 fix.go:112] recreateIfNeeded on embed-certs-368536: state=Stopped err=<nil>
	I0729 19:11:51.459558  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	W0729 19:11:51.459736  156414 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 19:11:51.461603  156414 out.go:177] * Restarting existing kvm2 VM for "embed-certs-368536" ...
	I0729 19:11:51.462768  156414 main.go:141] libmachine: (embed-certs-368536) Calling .Start
	I0729 19:11:51.462973  156414 main.go:141] libmachine: (embed-certs-368536) Ensuring networks are active...
	I0729 19:11:51.463679  156414 main.go:141] libmachine: (embed-certs-368536) Ensuring network default is active
	I0729 19:11:51.464155  156414 main.go:141] libmachine: (embed-certs-368536) Ensuring network mk-embed-certs-368536 is active
	I0729 19:11:51.464571  156414 main.go:141] libmachine: (embed-certs-368536) Getting domain xml...
	I0729 19:11:51.465359  156414 main.go:141] libmachine: (embed-certs-368536) Creating domain...
	I0729 19:11:51.794918  156414 main.go:141] libmachine: (embed-certs-368536) Waiting to get IP...
	I0729 19:11:51.795679  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:51.796159  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:51.796221  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:51.796150  157493 retry.go:31] will retry after 235.729444ms: waiting for machine to come up
	I0729 19:11:52.033554  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:52.034180  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:52.034207  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:52.034103  157493 retry.go:31] will retry after 323.595446ms: waiting for machine to come up
	I0729 19:11:52.359640  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:52.360204  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:52.360233  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:52.360143  157493 retry.go:31] will retry after 341.954873ms: waiting for machine to come up
	I0729 19:11:52.703779  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:52.704350  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:52.704373  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:52.704298  157493 retry.go:31] will retry after 385.738264ms: waiting for machine to come up
	I0729 19:11:53.091976  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:53.092451  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:53.092484  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:53.092397  157493 retry.go:31] will retry after 759.811264ms: waiting for machine to come up
	I0729 19:11:53.853241  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:53.853799  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:53.853826  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:53.853755  157493 retry.go:31] will retry after 761.294244ms: waiting for machine to come up
	I0729 19:11:54.616298  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:54.617009  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:54.617039  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:54.616952  157493 retry.go:31] will retry after 1.056936741s: waiting for machine to come up
	I0729 19:11:55.675047  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:55.675491  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:55.675514  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:55.675442  157493 retry.go:31] will retry after 1.232760679s: waiting for machine to come up
	I0729 19:11:56.909745  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:56.910283  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:56.910309  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:56.910228  157493 retry.go:31] will retry after 1.432617964s: waiting for machine to come up
	I0729 19:11:58.344399  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:58.345006  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:58.345033  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:58.344957  157493 retry.go:31] will retry after 1.914060621s: waiting for machine to come up
	I0729 19:12:00.262146  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:00.262707  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:12:00.262739  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:12:00.262638  157493 retry.go:31] will retry after 2.77447957s: waiting for machine to come up
	I0729 19:12:03.039059  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:03.039693  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:12:03.039718  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:12:03.039650  157493 retry.go:31] will retry after 2.755354142s: waiting for machine to come up
	I0729 19:12:05.797890  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:05.798279  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:12:05.798306  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:12:05.798234  157493 retry.go:31] will retry after 4.501451096s: waiting for machine to come up
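The polling above is the kvm2 driver waiting for a DHCP lease on the mk-embed-certs-368536 network for MAC 52:54:00:86:e7:e8, with roughly increasing delays between attempts, until the lease that shows up just below. If the libvirt client tools are installed on the host (an assumption; this log never calls them), the same lease table can be read directly:

    # Assumes virsh is available on the host running the kvm2 driver.
    virsh net-dhcp-leases mk-embed-certs-368536
    virsh net-dhcp-leases mk-embed-certs-368536 | grep -i '52:54:00:86:e7:e8'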
	I0729 19:12:10.304116  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.304586  156414 main.go:141] libmachine: (embed-certs-368536) Found IP for machine: 192.168.50.95
	I0729 19:12:10.304616  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has current primary IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.304626  156414 main.go:141] libmachine: (embed-certs-368536) Reserving static IP address...
	I0729 19:12:10.305096  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "embed-certs-368536", mac: "52:54:00:86:e7:e8", ip: "192.168.50.95"} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.305134  156414 main.go:141] libmachine: (embed-certs-368536) DBG | skip adding static IP to network mk-embed-certs-368536 - found existing host DHCP lease matching {name: "embed-certs-368536", mac: "52:54:00:86:e7:e8", ip: "192.168.50.95"}
	I0729 19:12:10.305151  156414 main.go:141] libmachine: (embed-certs-368536) Reserved static IP address: 192.168.50.95
	I0729 19:12:10.305166  156414 main.go:141] libmachine: (embed-certs-368536) Waiting for SSH to be available...
	I0729 19:12:10.305184  156414 main.go:141] libmachine: (embed-certs-368536) DBG | Getting to WaitForSSH function...
	I0729 19:12:10.307568  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.307936  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.307972  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.308079  156414 main.go:141] libmachine: (embed-certs-368536) DBG | Using SSH client type: external
	I0729 19:12:10.308110  156414 main.go:141] libmachine: (embed-certs-368536) DBG | Using SSH private key: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/embed-certs-368536/id_rsa (-rw-------)
	I0729 19:12:10.308140  156414 main.go:141] libmachine: (embed-certs-368536) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.95 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19339-88081/.minikube/machines/embed-certs-368536/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 19:12:10.308157  156414 main.go:141] libmachine: (embed-certs-368536) DBG | About to run SSH command:
	I0729 19:12:10.308170  156414 main.go:141] libmachine: (embed-certs-368536) DBG | exit 0
	I0729 19:12:10.436958  156414 main.go:141] libmachine: (embed-certs-368536) DBG | SSH cmd err, output: <nil>: 
	I0729 19:12:10.437313  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetConfigRaw
	I0729 19:12:10.437962  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetIP
	I0729 19:12:10.440164  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.440520  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.440545  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.440930  156414 profile.go:143] Saving config to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/config.json ...
	I0729 19:12:10.441184  156414 machine.go:94] provisionDockerMachine start ...
	I0729 19:12:10.441207  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:12:10.441430  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:10.443802  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.444155  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.444187  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.444375  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:10.444541  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:10.444719  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:10.444897  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:10.445064  156414 main.go:141] libmachine: Using SSH client type: native
	I0729 19:12:10.445289  156414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.95 22 <nil> <nil>}
	I0729 19:12:10.445300  156414 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 19:12:10.557371  156414 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 19:12:10.557405  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetMachineName
	I0729 19:12:10.557675  156414 buildroot.go:166] provisioning hostname "embed-certs-368536"
	I0729 19:12:10.557706  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetMachineName
	I0729 19:12:10.557905  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:10.560444  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.560793  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.560819  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.561027  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:10.561249  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:10.561442  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:10.561594  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:10.561783  156414 main.go:141] libmachine: Using SSH client type: native
	I0729 19:12:10.561990  156414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.95 22 <nil> <nil>}
	I0729 19:12:10.562006  156414 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-368536 && echo "embed-certs-368536" | sudo tee /etc/hostname
	I0729 19:12:10.688844  156414 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-368536
	
	I0729 19:12:10.688891  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:10.691701  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.692075  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.692102  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.692293  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:10.692468  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:10.692660  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:10.692821  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:10.693024  156414 main.go:141] libmachine: Using SSH client type: native
	I0729 19:12:10.693222  156414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.95 22 <nil> <nil>}
	I0729 19:12:10.693245  156414 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-368536' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-368536/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-368536' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 19:12:10.814346  156414 main.go:141] libmachine: SSH cmd err, output: <nil>: 
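Hostname provisioning above is two small shell commands: write /etc/hostname via tee, then patch the 127.0.1.1 line in /etc/hosts if it does not already name the machine. A quick way to confirm the result from inside the guest:

    # Verify what the two provisioning commands above should have left behind.
    hostname                              # expect: embed-certs-368536
    cat /etc/hostname
    grep embed-certs-368536 /etc/hosts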
	I0729 19:12:10.814376  156414 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19339-88081/.minikube CaCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19339-88081/.minikube}
	I0729 19:12:10.814400  156414 buildroot.go:174] setting up certificates
	I0729 19:12:10.814409  156414 provision.go:84] configureAuth start
	I0729 19:12:10.814417  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetMachineName
	I0729 19:12:10.814706  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetIP
	I0729 19:12:10.817503  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.817859  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.817886  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.818025  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:10.820077  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.820443  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.820473  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.820617  156414 provision.go:143] copyHostCerts
	I0729 19:12:10.820703  156414 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem, removing ...
	I0729 19:12:10.820713  156414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem
	I0729 19:12:10.820781  156414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem (1078 bytes)
	I0729 19:12:10.820905  156414 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem, removing ...
	I0729 19:12:10.820914  156414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem
	I0729 19:12:10.820943  156414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem (1123 bytes)
	I0729 19:12:10.821005  156414 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem, removing ...
	I0729 19:12:10.821013  156414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem
	I0729 19:12:10.821041  156414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem (1679 bytes)
	I0729 19:12:10.821115  156414 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem org=jenkins.embed-certs-368536 san=[127.0.0.1 192.168.50.95 embed-certs-368536 localhost minikube]
	I0729 19:12:10.867595  156414 provision.go:177] copyRemoteCerts
	I0729 19:12:10.867661  156414 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 19:12:10.867685  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:10.870501  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.870857  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.870881  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.871063  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:10.871267  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:10.871428  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:10.871595  156414 sshutil.go:53] new ssh client: &{IP:192.168.50.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/embed-certs-368536/id_rsa Username:docker}
	I0729 19:12:10.956269  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 19:12:10.981554  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 19:12:11.006055  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 19:12:11.031709  156414 provision.go:87] duration metric: took 217.286178ms to configureAuth
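configureAuth above regenerates a server certificate with the SANs listed earlier (127.0.0.1, 192.168.50.95, embed-certs-368536, localhost, minikube) and copies ca.pem, server.pem and server-key.pem into /etc/docker on the guest. Assuming openssl is present on the guest (this log never invokes it), the result can be inspected directly:

    # On the guest: confirm the copied certificates and their expiry/SANs.
    sudo ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem
    sudo openssl x509 -in /etc/docker/server.pem -noout -dates
    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'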
	I0729 19:12:11.031746  156414 buildroot.go:189] setting minikube options for container-runtime
	I0729 19:12:11.031924  156414 config.go:182] Loaded profile config "embed-certs-368536": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:12:11.032088  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:11.034772  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.035129  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:11.035159  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.035405  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:11.035583  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:11.035737  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:11.035863  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:11.036016  156414 main.go:141] libmachine: Using SSH client type: native
	I0729 19:12:11.036203  156414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.95 22 <nil> <nil>}
	I0729 19:12:11.036218  156414 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 19:12:11.321782  156414 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 19:12:11.321810  156414 machine.go:97] duration metric: took 880.608535ms to provisionDockerMachine
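The %!s(MISSING) in the logged command above is Go's fmt placeholder for a format verb with no matching argument in the logged string; the echoed output shows what actually lands on disk: a CRIO_MINIKUBE_OPTIONS line in /etc/sysconfig/crio.minikube, followed by a CRI-O restart. To confirm on the guest:

    # On the guest: check the drop-in the provisioner just wrote and that CRI-O restarted.
    cat /etc/sysconfig/crio.minikube     # expect: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    sudo systemctl is-active crio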
	I0729 19:12:11.321823  156414 start.go:293] postStartSetup for "embed-certs-368536" (driver="kvm2")
	I0729 19:12:11.321834  156414 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 19:12:11.321850  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:12:11.322243  156414 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 19:12:11.322281  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:11.325081  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.325437  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:11.325459  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.325612  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:11.325841  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:11.326025  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:11.326177  156414 sshutil.go:53] new ssh client: &{IP:192.168.50.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/embed-certs-368536/id_rsa Username:docker}
	I0729 19:12:11.412548  156414 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 19:12:11.417360  156414 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 19:12:11.417386  156414 filesync.go:126] Scanning /home/jenkins/minikube-integration/19339-88081/.minikube/addons for local assets ...
	I0729 19:12:11.417466  156414 filesync.go:126] Scanning /home/jenkins/minikube-integration/19339-88081/.minikube/files for local assets ...
	I0729 19:12:11.417538  156414 filesync.go:149] local asset: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem -> 952822.pem in /etc/ssl/certs
	I0729 19:12:11.417632  156414 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 19:12:11.429176  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem --> /etc/ssl/certs/952822.pem (1708 bytes)
	I0729 19:12:11.457333  156414 start.go:296] duration metric: took 135.490005ms for postStartSetup
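
postStartSetup above scans .minikube/addons and .minikube/files on the build host and mirrors whatever it finds (here a single file, 952822.pem) into the matching path on the guest. A rough sketch of that scan under the same directory layout; the real filesync.go logic differs in detail:

package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
	"strings"
)

func main() {
	root := "/home/jenkins/minikube-integration/19339-88081/.minikube/files"
	// Every file under .minikube/files/<path> maps to /<path> on the guest,
	// e.g. files/etc/ssl/certs/952822.pem -> /etc/ssl/certs/952822.pem.
	_ = filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		dest := "/" + strings.TrimPrefix(p, root+"/")
		fmt.Printf("local asset: %s -> %s\n", p, dest)
		return nil
	})
}
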
	I0729 19:12:11.457401  156414 fix.go:56] duration metric: took 20.017075153s for fixHost
	I0729 19:12:11.457428  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:11.460243  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.460653  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:11.460702  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.460873  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:11.461060  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:11.461233  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:11.461408  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:11.461586  156414 main.go:141] libmachine: Using SSH client type: native
	I0729 19:12:11.461763  156414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.95 22 <nil> <nil>}
	I0729 19:12:11.461779  156414 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 19:12:11.573697  156414 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722280331.531987438
	
	I0729 19:12:11.573724  156414 fix.go:216] guest clock: 1722280331.531987438
	I0729 19:12:11.573735  156414 fix.go:229] Guest: 2024-07-29 19:12:11.531987438 +0000 UTC Remote: 2024-07-29 19:12:11.457406225 +0000 UTC m=+302.608153452 (delta=74.581213ms)
	I0729 19:12:11.573758  156414 fix.go:200] guest clock delta is within tolerance: 74.581213ms
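
The clock check above runs date +%s.%N on the guest and compares it with the host-side timestamp, accepting the machine because the 74.58ms delta is inside the tolerance. The same comparison in Go, with a 1-second tolerance assumed for illustration (the actual tolerance value is not shown in the log):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Values taken from the log lines above.
	guest := time.Unix(1722280331, 531987438).UTC()
	host := time.Date(2024, 7, 29, 19, 12, 11, 457406225, time.UTC)

	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	tolerance := time.Second // assumed; not shown in the log
	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta <= tolerance)
}
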
	I0729 19:12:11.573763  156414 start.go:83] releasing machines lock for "embed-certs-368536", held for 20.133485011s
	I0729 19:12:11.573782  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:12:11.574056  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetIP
	I0729 19:12:11.576405  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.576760  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:11.576798  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.576988  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:12:11.577479  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:12:11.577644  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:12:11.577737  156414 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 19:12:11.577799  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:11.577840  156414 ssh_runner.go:195] Run: cat /version.json
	I0729 19:12:11.577869  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:11.580473  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.580564  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.580840  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:11.580890  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.580921  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:11.580936  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.581056  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:11.581203  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:11.581358  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:11.581369  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:11.581554  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:11.581564  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:11.581669  156414 sshutil.go:53] new ssh client: &{IP:192.168.50.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/embed-certs-368536/id_rsa Username:docker}
	I0729 19:12:11.581732  156414 sshutil.go:53] new ssh client: &{IP:192.168.50.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/embed-certs-368536/id_rsa Username:docker}
	I0729 19:12:11.662254  156414 ssh_runner.go:195] Run: systemctl --version
	I0729 19:12:11.688214  156414 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 19:12:11.838322  156414 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 19:12:11.844435  156414 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 19:12:11.844521  156414 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 19:12:11.859899  156414 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 19:12:11.859923  156414 start.go:495] detecting cgroup driver to use...
	I0729 19:12:11.859990  156414 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 19:12:11.876768  156414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 19:12:11.890508  156414 docker.go:217] disabling cri-docker service (if available) ...
	I0729 19:12:11.890584  156414 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 19:12:11.904817  156414 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 19:12:11.919251  156414 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 19:12:12.053205  156414 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 19:12:12.205975  156414 docker.go:233] disabling docker service ...
	I0729 19:12:12.206041  156414 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 19:12:12.222129  156414 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 19:12:12.235288  156414 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 19:12:12.386940  156414 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 19:12:12.503688  156414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 19:12:12.518539  156414 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 19:12:12.538744  156414 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 19:12:12.538805  156414 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:12:12.549615  156414 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 19:12:12.549683  156414 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:12:12.560565  156414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:12:12.571814  156414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:12:12.582852  156414 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 19:12:12.595237  156414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:12:12.607232  156414 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:12:12.626183  156414 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
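
The block above patches /etc/crio/crio.conf.d/02-crio.conf in place with sed: pause image, cgroup_manager and conmon_cgroup. A sketch of a subset of those edits done as in-memory regex rewrites over an illustrative starting config:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Illustrative starting config; the real file is /etc/crio/crio.conf.d/02-crio.conf.
	conf := `pause_image = "registry.k8s.io/pause:3.6"
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// Mirror a subset of the sed edits from the log above, in the same order.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).
		ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
		ReplaceAllString(conf, "$0\nconmon_cgroup = \"pod\"")
	fmt.Print(conf)
}
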
	I0729 19:12:12.637202  156414 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 19:12:12.647141  156414 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 19:12:12.647204  156414 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 19:12:12.661099  156414 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 19:12:12.671936  156414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:12:12.795418  156414 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 19:12:12.934125  156414 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 19:12:12.934220  156414 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 19:12:12.939058  156414 start.go:563] Will wait 60s for crictl version
	I0729 19:12:12.939123  156414 ssh_runner.go:195] Run: which crictl
	I0729 19:12:12.943972  156414 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 19:12:12.990564  156414 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
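
Both waits above ("Will wait 60s for socket path", "Will wait 60s for crictl version") are plain poll loops with a deadline. A sketch of polling crictl version until it succeeds or 60s elapse, with a 2-second retry interval assumed:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(60 * time.Second)
	for {
		out, err := exec.Command("sudo", "/usr/bin/crictl", "version").CombinedOutput()
		if err == nil {
			fmt.Printf("crictl is up:\n%s", out)
			return
		}
		if time.Now().After(deadline) {
			fmt.Printf("gave up waiting for crictl: %v\n", err)
			return
		}
		time.Sleep(2 * time.Second) // retry interval assumed for illustration
	}
}
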
	I0729 19:12:12.990672  156414 ssh_runner.go:195] Run: crio --version
	I0729 19:12:13.018852  156414 ssh_runner.go:195] Run: crio --version
	I0729 19:12:13.053593  156414 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 19:12:13.054890  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetIP
	I0729 19:12:13.057601  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:13.057994  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:13.058025  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:13.058229  156414 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0729 19:12:13.062303  156414 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 19:12:13.075815  156414 kubeadm.go:883] updating cluster {Name:embed-certs-368536 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-368536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.95 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 19:12:13.075989  156414 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 19:12:13.076042  156414 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:12:13.112073  156414 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 19:12:13.112136  156414 ssh_runner.go:195] Run: which lz4
	I0729 19:12:13.116209  156414 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 19:12:13.120509  156414 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 19:12:13.120545  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 19:12:14.521429  156414 crio.go:462] duration metric: took 1.405252765s to copy over tarball
	I0729 19:12:14.521516  156414 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 19:12:16.740086  156414 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.218529852s)
	I0729 19:12:16.740129  156414 crio.go:469] duration metric: took 2.218665992s to extract the tarball
	I0729 19:12:16.740140  156414 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 19:12:16.779302  156414 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:12:16.825787  156414 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 19:12:16.825823  156414 cache_images.go:84] Images are preloaded, skipping loading
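
The preload path above: crictl images reports no kube-apiserver image, so the ~406 MB preloaded-images tarball is copied to /preloaded.tar.lz4, unpacked under /var with tar -I lz4, and then deleted, after which the image check passes. A sketch of that extract-and-clean-up step using the same tar invocation:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"
	if _, err := os.Stat(tarball); err != nil {
		fmt.Println("no preload tarball on the guest yet:", err)
		return
	}
	// Same extraction command as in the log: xattrs preserved, lz4 decompression.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
		return
	}
	_ = os.Remove(tarball) // the runner removes the tarball once extracted
	fmt.Println("preloaded images extracted")
}
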
	I0729 19:12:16.825832  156414 kubeadm.go:934] updating node { 192.168.50.95 8443 v1.30.3 crio true true} ...
	I0729 19:12:16.825972  156414 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-368536 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.95
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-368536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
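
The kubelet drop-in above is rendered from the node name, IP and Kubernetes version. A sketch of producing that [Service] section with text/template, using the values from the log (the real template lives in minikube's bootstrapper code):

package main

import (
	"os"
	"text/template"
)

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	_ = t.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.30.3",
		"NodeName":          "embed-certs-368536",
		"NodeIP":            "192.168.50.95",
	})
}
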
	I0729 19:12:16.826034  156414 ssh_runner.go:195] Run: crio config
	I0729 19:12:16.873701  156414 cni.go:84] Creating CNI manager for ""
	I0729 19:12:16.873734  156414 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:12:16.873752  156414 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 19:12:16.873776  156414 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.95 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-368536 NodeName:embed-certs-368536 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.95"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.95 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 19:12:16.873923  156414 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.95
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-368536"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.95
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.95"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 19:12:16.873987  156414 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 19:12:16.884427  156414 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 19:12:16.884530  156414 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 19:12:16.895097  156414 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0729 19:12:16.914112  156414 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 19:12:16.931797  156414 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
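
The rendered kubeadm config above is a single file holding four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is written to /var/tmp/minikube/kubeadm.yaml.new. A stdlib-only sketch that splits such a file on document separators and reports each kind:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println(err)
		return
	}
	for i, doc := range strings.Split(string(data), "\n---\n") {
		kind := "unknown"
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				kind = strings.TrimPrefix(line, "kind: ")
				break
			}
		}
		fmt.Printf("document %d: %s\n", i+1, kind)
	}
}
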
	I0729 19:12:16.949824  156414 ssh_runner.go:195] Run: grep 192.168.50.95	control-plane.minikube.internal$ /etc/hosts
	I0729 19:12:16.953765  156414 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.95	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 19:12:16.967138  156414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:12:17.088789  156414 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 19:12:17.108577  156414 certs.go:68] Setting up /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536 for IP: 192.168.50.95
	I0729 19:12:17.108601  156414 certs.go:194] generating shared ca certs ...
	I0729 19:12:17.108623  156414 certs.go:226] acquiring lock for ca certs: {Name:mk20106d58b3f22ea0fc0f4b499fa3fa572a2690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:12:17.108831  156414 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key
	I0729 19:12:17.109076  156414 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key
	I0729 19:12:17.109118  156414 certs.go:256] generating profile certs ...
	I0729 19:12:17.109296  156414 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/client.key
	I0729 19:12:17.109394  156414 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/apiserver.key.77ca8755
	I0729 19:12:17.109448  156414 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/proxy-client.key
	I0729 19:12:17.109619  156414 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282.pem (1338 bytes)
	W0729 19:12:17.109651  156414 certs.go:480] ignoring /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282_empty.pem, impossibly tiny 0 bytes
	I0729 19:12:17.109661  156414 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 19:12:17.109688  156414 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem (1078 bytes)
	I0729 19:12:17.109722  156414 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem (1123 bytes)
	I0729 19:12:17.109756  156414 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem (1679 bytes)
	I0729 19:12:17.109819  156414 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem (1708 bytes)
	I0729 19:12:17.110926  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 19:12:17.142003  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 19:12:17.176798  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 19:12:17.204272  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 19:12:17.244558  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0729 19:12:17.281935  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 19:12:17.306456  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 19:12:17.332576  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 19:12:17.357372  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282.pem --> /usr/share/ca-certificates/95282.pem (1338 bytes)
	I0729 19:12:17.380363  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem --> /usr/share/ca-certificates/952822.pem (1708 bytes)
	I0729 19:12:17.403737  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 19:12:17.428493  156414 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 19:12:17.445767  156414 ssh_runner.go:195] Run: openssl version
	I0729 19:12:17.451514  156414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/952822.pem && ln -fs /usr/share/ca-certificates/952822.pem /etc/ssl/certs/952822.pem"
	I0729 19:12:17.463663  156414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/952822.pem
	I0729 19:12:17.469467  156414 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:45 /usr/share/ca-certificates/952822.pem
	I0729 19:12:17.469523  156414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/952822.pem
	I0729 19:12:17.475754  156414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/952822.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 19:12:17.487617  156414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 19:12:17.498699  156414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:12:17.503205  156414 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:12:17.503257  156414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:12:17.508880  156414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 19:12:17.519570  156414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/95282.pem && ln -fs /usr/share/ca-certificates/95282.pem /etc/ssl/certs/95282.pem"
	I0729 19:12:17.530630  156414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/95282.pem
	I0729 19:12:17.535004  156414 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:45 /usr/share/ca-certificates/95282.pem
	I0729 19:12:17.535046  156414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/95282.pem
	I0729 19:12:17.540647  156414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/95282.pem /etc/ssl/certs/51391683.0"
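
Each CA above is installed by hashing it with openssl x509 -hash and linking /etc/ssl/certs/<hash>.0 to the PEM, which is where the hash-named links (3ec20f2e.0, b5213941.0, 51391683.0) in the log come from. A sketch of that pairing, assuming openssl on PATH and linking straight at the /usr/share copies rather than reproducing the two-step ln sequence in the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func install(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	// Equivalent to the `test -L ... || ln -fs ...` guard in the log.
	if _, err := os.Lstat(link); err == nil {
		return nil
	}
	return os.Symlink(pem, link)
}

func main() {
	for _, pem := range []string{
		"/usr/share/ca-certificates/952822.pem",
		"/usr/share/ca-certificates/minikubeCA.pem",
		"/usr/share/ca-certificates/95282.pem",
	} {
		fmt.Println(pem, install(pem))
	}
}
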
	I0729 19:12:17.551719  156414 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 19:12:17.556199  156414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 19:12:17.562469  156414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 19:12:17.568561  156414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 19:12:17.574653  156414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 19:12:17.580458  156414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 19:12:17.586392  156414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
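
The -checkend 86400 invocations above simply ask whether each certificate is still valid 24 hours from now. The equivalent check in Go against one of the same files (path taken from the log; this would have to run on the guest where the certs live):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Println(err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println(err)
		return
	}
	// openssl x509 -checkend 86400: fail if the cert expires within the next 24h.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h, would be regenerated")
	} else {
		fmt.Println("certificate valid for at least another 24h")
	}
}
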
	I0729 19:12:17.592304  156414 kubeadm.go:392] StartCluster: {Name:embed-certs-368536 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:embed-certs-368536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.95 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:12:17.592422  156414 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 19:12:17.592467  156414 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:12:17.631628  156414 cri.go:89] found id: ""
	I0729 19:12:17.631701  156414 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 19:12:17.642636  156414 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 19:12:17.642665  156414 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 19:12:17.642731  156414 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 19:12:17.652551  156414 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 19:12:17.653603  156414 kubeconfig.go:125] found "embed-certs-368536" server: "https://192.168.50.95:8443"
	I0729 19:12:17.656022  156414 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 19:12:17.666987  156414 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.95
	I0729 19:12:17.667014  156414 kubeadm.go:1160] stopping kube-system containers ...
	I0729 19:12:17.667026  156414 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 19:12:17.667065  156414 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:12:17.709534  156414 cri.go:89] found id: ""
	I0729 19:12:17.709598  156414 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 19:12:17.729709  156414 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:12:17.739968  156414 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:12:17.739990  156414 kubeadm.go:157] found existing configuration files:
	
	I0729 19:12:17.740051  156414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 19:12:17.749727  156414 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:12:17.749794  156414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:12:17.760013  156414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 19:12:17.781070  156414 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:12:17.781135  156414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:12:17.794747  156414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 19:12:17.805862  156414 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:12:17.805938  156414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:12:17.815892  156414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 19:12:17.825005  156414 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:12:17.825072  156414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 19:12:17.834745  156414 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:12:17.844191  156414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:12:17.963586  156414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:12:19.325254  156414 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.361626819s)
	I0729 19:12:19.325289  156414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:12:19.537565  156414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:12:19.611697  156414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
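
The restart path above re-runs individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd local) against /var/tmp/minikube/kubeadm.yaml instead of a full kubeadm init. A sketch of driving that sequence; the binary path and config file are taken from the log, while the real runner also prepends a PATH override via env:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.30.3/kubeadm"
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append([]string{kubeadm}, append(p, "--config", cfg)...)
		out, err := exec.Command("sudo", args...).CombinedOutput()
		if err != nil {
			fmt.Printf("%v failed: %v\n%s", p, err, out)
			return
		}
	}
	fmt.Println("control-plane phases completed")
}
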
	I0729 19:12:19.710291  156414 api_server.go:52] waiting for apiserver process to appear ...
	I0729 19:12:19.710408  156414 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:12:20.210809  156414 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:12:20.711234  156414 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:12:20.727361  156414 api_server.go:72] duration metric: took 1.017067714s to wait for apiserver process to appear ...
	I0729 19:12:20.727396  156414 api_server.go:88] waiting for apiserver healthz status ...
	I0729 19:12:20.727432  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:20.727961  156414 api_server.go:269] stopped: https://192.168.50.95:8443/healthz: Get "https://192.168.50.95:8443/healthz": dial tcp 192.168.50.95:8443: connect: connection refused
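
The loop above polls https://192.168.50.95:8443/healthz roughly every 500ms, treating connection refused, 403 (RBAC not yet bootstrapped) and 500 (post-start hooks still failing) as reasons to keep waiting. A minimal version of that poll; TLS verification is skipped here as a shortcut, whereas minikube's api_server.go trusts the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.50.95:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz ok: %s\n", body)
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver never became healthy")
}
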
	I0729 19:12:21.228408  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:23.622048  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 19:12:23.622093  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 19:12:23.622106  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:23.628174  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 19:12:23.628195  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 19:12:23.728403  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:23.732920  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:12:23.732969  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:12:24.227954  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:24.233109  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:12:24.233143  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:12:24.727641  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:24.735127  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:12:24.735156  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:12:25.227686  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:25.236914  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:12:25.236947  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:12:25.728500  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:25.735276  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:12:25.735307  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:12:26.227824  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:26.232439  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:12:26.232471  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:12:26.728521  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:26.732952  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:12:26.732989  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:12:27.227504  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:27.232166  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 200:
	ok
	I0729 19:12:27.238505  156414 api_server.go:141] control plane version: v1.30.3
	I0729 19:12:27.238531  156414 api_server.go:131] duration metric: took 6.511128129s to wait for apiserver health ...
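The wait above is a plain polling loop against the apiserver's /healthz endpoint until it stops returning 500. A rough manual equivalent from the host is sketched below; this is illustrative only (minikube's api_server.go uses an authenticated Go HTTP client rather than curl, and /healthz may need client credentials if anonymous auth is disabled on the cluster):

    # Sketch: poll the same endpoint the log shows until it reports healthy
    until curl -ksf https://192.168.50.95:8443/healthz >/dev/null; do
      sleep 0.5
    done
    # The per-check [+]/[-] breakdown seen in the 500 responses above comes from ?verbose
    curl -ks 'https://192.168.50.95:8443/healthz?verbose'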
	I0729 19:12:27.238541  156414 cni.go:84] Creating CNI manager for ""
	I0729 19:12:27.238560  156414 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:12:27.240943  156414 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 19:12:27.242526  156414 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 19:12:27.254578  156414 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
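The 496-byte conflist itself is not reproduced in the log. A representative bridge CNI config of the kind this step writes is shown below; the field values are illustrative assumptions, not the exact file minikube generates:

    # Sketch of a bridge CNI conflist (contents assumed, not copied from the cluster)
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF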
	I0729 19:12:27.274700  156414 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 19:12:27.289359  156414 system_pods.go:59] 8 kube-system pods found
	I0729 19:12:27.289412  156414 system_pods.go:61] "coredns-7db6d8ff4d-dww2j" [31418aeb-98be-4b43-a687-5c8f64ec5e9d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:12:27.289424  156414 system_pods.go:61] "etcd-embed-certs-368536" [0a7aac00-9161-473a-87f1-2702211beaac] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 19:12:27.289443  156414 system_pods.go:61] "kube-apiserver-embed-certs-368536" [d6638a87-43df-457e-b335-1ab2aa03f421] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 19:12:27.289469  156414 system_pods.go:61] "kube-controller-manager-embed-certs-368536" [76fefc23-7794-40fc-845a-73d83eb55450] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 19:12:27.289478  156414 system_pods.go:61] "kube-proxy-9xwt2" [f637605b-9bc2-4922-b801-04f681a81e7c] Running
	I0729 19:12:27.289485  156414 system_pods.go:61] "kube-scheduler-embed-certs-368536" [476e1d42-6610-44b5-b77b-b25537f044eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 19:12:27.289495  156414 system_pods.go:61] "metrics-server-569cc877fc-xnkwq" [19328b03-8c7d-499f-9a30-31f023605e49] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:12:27.289500  156414 system_pods.go:61] "storage-provisioner" [b049d065-fbb1-48b6-a377-2938a0519c78] Running
	I0729 19:12:27.289513  156414 system_pods.go:74] duration metric: took 14.789212ms to wait for pod list to return data ...
	I0729 19:12:27.289525  156414 node_conditions.go:102] verifying NodePressure condition ...
	I0729 19:12:27.292692  156414 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 19:12:27.292716  156414 node_conditions.go:123] node cpu capacity is 2
	I0729 19:12:27.292809  156414 node_conditions.go:105] duration metric: took 3.278515ms to run NodePressure ...
	I0729 19:12:27.292825  156414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:12:27.563167  156414 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 19:12:27.567532  156414 kubeadm.go:739] kubelet initialised
	I0729 19:12:27.567552  156414 kubeadm.go:740] duration metric: took 4.363687ms waiting for restarted kubelet to initialise ...
	I0729 19:12:27.567566  156414 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:12:27.573549  156414 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-dww2j" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:29.580511  156414 pod_ready.go:102] pod "coredns-7db6d8ff4d-dww2j" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:32.080352  156414 pod_ready.go:102] pod "coredns-7db6d8ff4d-dww2j" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:34.079542  156414 pod_ready.go:92] pod "coredns-7db6d8ff4d-dww2j" in "kube-system" namespace has status "Ready":"True"
	I0729 19:12:34.079564  156414 pod_ready.go:81] duration metric: took 6.505994438s for pod "coredns-7db6d8ff4d-dww2j" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:34.079574  156414 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:36.086785  156414 pod_ready.go:102] pod "etcd-embed-certs-368536" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:37.089587  156414 pod_ready.go:92] pod "etcd-embed-certs-368536" in "kube-system" namespace has status "Ready":"True"
	I0729 19:12:37.089611  156414 pod_ready.go:81] duration metric: took 3.010030448s for pod "etcd-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.089621  156414 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.093814  156414 pod_ready.go:92] pod "kube-apiserver-embed-certs-368536" in "kube-system" namespace has status "Ready":"True"
	I0729 19:12:37.093833  156414 pod_ready.go:81] duration metric: took 4.206508ms for pod "kube-apiserver-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.093842  156414 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.097603  156414 pod_ready.go:92] pod "kube-controller-manager-embed-certs-368536" in "kube-system" namespace has status "Ready":"True"
	I0729 19:12:37.097621  156414 pod_ready.go:81] duration metric: took 3.772149ms for pod "kube-controller-manager-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.097633  156414 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9xwt2" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.101696  156414 pod_ready.go:92] pod "kube-proxy-9xwt2" in "kube-system" namespace has status "Ready":"True"
	I0729 19:12:37.101720  156414 pod_ready.go:81] duration metric: took 4.078653ms for pod "kube-proxy-9xwt2" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.101732  156414 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.107711  156414 pod_ready.go:92] pod "kube-scheduler-embed-certs-368536" in "kube-system" namespace has status "Ready":"True"
	I0729 19:12:37.107728  156414 pod_ready.go:81] duration metric: took 5.989066ms for pod "kube-scheduler-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.107738  156414 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:39.113796  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:41.115401  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:43.614461  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:45.614900  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:48.113738  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:50.114364  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:52.114790  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:54.613472  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:56.613889  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:59.114206  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:01.614516  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:04.114859  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:06.114993  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:08.615213  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:11.114015  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:13.114342  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:15.614423  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:18.114442  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:20.614465  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:23.117909  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:25.614155  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:28.114746  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:30.613689  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:32.614790  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:35.113716  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:37.116362  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:39.614545  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:42.114414  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:44.114654  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:46.114765  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:48.615523  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:51.114096  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:53.114180  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:55.614278  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:58.114138  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:00.613679  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:02.614878  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:05.117278  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:07.614025  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:09.614681  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:12.114414  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:14.114458  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:16.614364  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:19.114533  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:21.613756  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:24.114325  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:26.614276  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:29.114137  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:31.114274  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:33.115749  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:35.614067  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:37.614374  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:39.615618  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:42.114139  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:44.114503  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:46.114624  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:48.613926  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:50.614527  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:53.115129  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:55.613563  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:57.615164  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:59.616129  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:02.114384  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:04.114621  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:06.114864  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:08.115242  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:10.613949  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:13.115359  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:15.614560  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:17.615109  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:20.114341  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:22.115253  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:24.119792  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:26.614361  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:29.113806  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:31.114150  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:33.614207  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:35.616204  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:38.113264  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:40.615054  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:42.615127  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:45.115119  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:47.613589  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:49.613803  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:51.615235  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:54.113908  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:56.614614  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:59.114193  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:01.614642  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:04.114186  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:06.614156  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:08.614216  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:10.615368  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:13.116263  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:15.613987  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:17.614183  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:19.617124  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:22.114156  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:24.613643  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:26.613720  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:28.616174  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:31.114289  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:33.114818  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:35.614735  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:37.107998  156414 pod_ready.go:81] duration metric: took 4m0.000241864s for pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace to be "Ready" ...
	E0729 19:16:37.108045  156414 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 19:16:37.108068  156414 pod_ready.go:38] duration metric: took 4m9.540493845s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
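The 4m0s timeout above is the metrics-server pod never reaching Ready. The same condition can be checked by hand against this profile's context; the commands below are hypothetical reproduction steps and assume the addon's pods carry the usual k8s-app=metrics-server label:

    kubectl --context embed-certs-368536 -n kube-system get pods -l k8s-app=metrics-server
    kubectl --context embed-certs-368536 -n kube-system wait pod \
      -l k8s-app=metrics-server --for=condition=Ready --timeout=4m
    # Inspect why the container stays unready (image pull, probe failures, etc.)
    kubectl --context embed-certs-368536 -n kube-system describe pod metrics-server-569cc877fc-xnkwq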
	I0729 19:16:37.108105  156414 kubeadm.go:597] duration metric: took 4m19.465427343s to restartPrimaryControlPlane
	W0729 19:16:37.108167  156414 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 19:16:37.108196  156414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 19:17:08.548650  156414 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.44042578s)
	I0729 19:17:08.548730  156414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 19:17:08.564620  156414 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:17:08.575061  156414 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:17:08.585537  156414 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:17:08.585566  156414 kubeadm.go:157] found existing configuration files:
	
	I0729 19:17:08.585610  156414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 19:17:08.594641  156414 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:17:08.594702  156414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:17:08.604434  156414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 19:17:08.613126  156414 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:17:08.613177  156414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:17:08.622123  156414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 19:17:08.630620  156414 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:17:08.630661  156414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:17:08.640140  156414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 19:17:08.648712  156414 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:17:08.648768  156414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
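The four grep/rm pairs above apply one rule before kubeadm init runs again: any leftover kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is removed. Condensed into a single loop (the loop form is an editor's sketch, not minikube's code):

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done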
	I0729 19:17:08.658010  156414 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 19:17:08.709849  156414 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 19:17:08.709998  156414 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 19:17:08.850515  156414 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 19:17:08.850632  156414 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 19:17:08.850769  156414 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 19:17:09.057782  156414 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 19:17:09.059421  156414 out.go:204]   - Generating certificates and keys ...
	I0729 19:17:09.059494  156414 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 19:17:09.059566  156414 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 19:17:09.059636  156414 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 19:17:09.062277  156414 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 19:17:09.062401  156414 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 19:17:09.062475  156414 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 19:17:09.062526  156414 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 19:17:09.062616  156414 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 19:17:09.062695  156414 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 19:17:09.062807  156414 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 19:17:09.062863  156414 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 19:17:09.062933  156414 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 19:17:09.426782  156414 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 19:17:09.599745  156414 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 19:17:09.741530  156414 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 19:17:09.907315  156414 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 19:17:10.118045  156414 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 19:17:10.118623  156414 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 19:17:10.121594  156414 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 19:17:10.124052  156414 out.go:204]   - Booting up control plane ...
	I0729 19:17:10.124173  156414 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 19:17:10.124267  156414 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 19:17:10.124374  156414 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 19:17:10.144903  156414 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 19:17:10.145010  156414 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 19:17:10.145047  156414 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 19:17:10.278905  156414 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 19:17:10.279025  156414 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 19:17:11.280964  156414 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002120381s
	I0729 19:17:11.281070  156414 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 19:17:15.782460  156414 kubeadm.go:310] [api-check] The API server is healthy after 4.501562605s
	I0729 19:17:15.804614  156414 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 19:17:15.822230  156414 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 19:17:15.849613  156414 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 19:17:15.849870  156414 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-368536 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 19:17:15.861910  156414 kubeadm.go:310] [bootstrap-token] Using token: zhramo.fqhnhxuylehyq043
	I0729 19:17:15.863215  156414 out.go:204]   - Configuring RBAC rules ...
	I0729 19:17:15.863352  156414 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 19:17:15.870893  156414 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 19:17:15.886779  156414 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 19:17:15.889933  156414 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 19:17:15.893111  156414 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 19:17:15.895970  156414 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 19:17:16.200928  156414 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 19:17:16.625621  156414 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 19:17:17.195772  156414 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 19:17:17.197712  156414 kubeadm.go:310] 
	I0729 19:17:17.197780  156414 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 19:17:17.197791  156414 kubeadm.go:310] 
	I0729 19:17:17.197874  156414 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 19:17:17.197885  156414 kubeadm.go:310] 
	I0729 19:17:17.197925  156414 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 19:17:17.198023  156414 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 19:17:17.198108  156414 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 19:17:17.198120  156414 kubeadm.go:310] 
	I0729 19:17:17.198190  156414 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 19:17:17.198200  156414 kubeadm.go:310] 
	I0729 19:17:17.198258  156414 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 19:17:17.198267  156414 kubeadm.go:310] 
	I0729 19:17:17.198347  156414 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 19:17:17.198451  156414 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 19:17:17.198529  156414 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 19:17:17.198539  156414 kubeadm.go:310] 
	I0729 19:17:17.198633  156414 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 19:17:17.198750  156414 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 19:17:17.198761  156414 kubeadm.go:310] 
	I0729 19:17:17.198895  156414 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token zhramo.fqhnhxuylehyq043 \
	I0729 19:17:17.199041  156414 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:13f0bc092dc09b7291dec830fc09c94db5ac1707dd26df6091df80009af3af7f \
	I0729 19:17:17.199074  156414 kubeadm.go:310] 	--control-plane 
	I0729 19:17:17.199081  156414 kubeadm.go:310] 
	I0729 19:17:17.199199  156414 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 19:17:17.199210  156414 kubeadm.go:310] 
	I0729 19:17:17.199327  156414 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token zhramo.fqhnhxuylehyq043 \
	I0729 19:17:17.199478  156414 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:13f0bc092dc09b7291dec830fc09c94db5ac1707dd26df6091df80009af3af7f 
	I0729 19:17:17.200591  156414 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 19:17:17.200629  156414 cni.go:84] Creating CNI manager for ""
	I0729 19:17:17.200642  156414 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:17:17.202541  156414 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 19:17:17.203847  156414 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 19:17:17.214711  156414 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 19:17:17.233233  156414 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 19:17:17.233330  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:17.233332  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-368536 minikube.k8s.io/updated_at=2024_07_29T19_17_17_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=de53afae5e8f4269438099f4dad14d93a8a17e35 minikube.k8s.io/name=embed-certs-368536 minikube.k8s.io/primary=true
	I0729 19:17:17.265931  156414 ops.go:34] apiserver oom_adj: -16
	I0729 19:17:17.410594  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:17.911585  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:18.410650  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:18.911432  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:19.411062  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:19.911629  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:20.411050  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:20.911004  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:21.411031  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:21.910787  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:22.411228  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:22.911181  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:23.410624  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:23.910844  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:24.411409  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:24.910745  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:25.410675  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:25.910901  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:26.411562  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:26.911505  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:27.411552  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:27.910916  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:28.410868  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:28.911466  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:29.410633  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:29.911613  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:29.992725  156414 kubeadm.go:1113] duration metric: took 12.75946311s to wait for elevateKubeSystemPrivileges
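The repeated "kubectl get sa default" calls above act as a readiness gate: the cluster is not considered started until the controller-manager has created the "default" ServiceAccount. A shell equivalent of that polling loop (sketch only, using the same binary and kubeconfig paths shown in the log) is:

    until sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done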
	I0729 19:17:29.992767  156414 kubeadm.go:394] duration metric: took 5m12.400472687s to StartCluster
	I0729 19:17:29.992793  156414 settings.go:142] acquiring lock: {Name:mk5599d3fc2f664ed5eea99f33b4436f64ab8c83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:17:29.992902  156414 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19339-88081/kubeconfig
	I0729 19:17:29.994489  156414 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/kubeconfig: {Name:mk6702c0db404329102489655bdd2ff03ad6e919 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:17:29.994792  156414 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.95 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 19:17:29.994828  156414 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 19:17:29.994917  156414 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-368536"
	I0729 19:17:29.994954  156414 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-368536"
	I0729 19:17:29.994957  156414 addons.go:69] Setting default-storageclass=true in profile "embed-certs-368536"
	W0729 19:17:29.994966  156414 addons.go:243] addon storage-provisioner should already be in state true
	I0729 19:17:29.994969  156414 addons.go:69] Setting metrics-server=true in profile "embed-certs-368536"
	I0729 19:17:29.995004  156414 host.go:66] Checking if "embed-certs-368536" exists ...
	I0729 19:17:29.995003  156414 config.go:182] Loaded profile config "embed-certs-368536": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:17:29.995028  156414 addons.go:234] Setting addon metrics-server=true in "embed-certs-368536"
	W0729 19:17:29.995041  156414 addons.go:243] addon metrics-server should already be in state true
	I0729 19:17:29.994986  156414 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-368536"
	I0729 19:17:29.995073  156414 host.go:66] Checking if "embed-certs-368536" exists ...
	I0729 19:17:29.995409  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:17:29.995457  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:17:29.995460  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:17:29.995487  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:17:29.995487  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:17:29.995636  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:17:29.997279  156414 out.go:177] * Verifying Kubernetes components...
	I0729 19:17:29.998614  156414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:17:30.011510  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41821
	I0729 19:17:30.011717  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38475
	I0729 19:17:30.011970  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:17:30.012063  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:17:30.012480  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:17:30.012505  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:17:30.012626  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:17:30.012651  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:17:30.012967  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:17:30.013105  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:17:30.013284  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetState
	I0729 19:17:30.013527  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:17:30.013574  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:17:30.014086  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45947
	I0729 19:17:30.014502  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:17:30.015001  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:17:30.015018  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:17:30.015505  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:17:30.016720  156414 addons.go:234] Setting addon default-storageclass=true in "embed-certs-368536"
	W0729 19:17:30.016740  156414 addons.go:243] addon default-storageclass should already be in state true
	I0729 19:17:30.016770  156414 host.go:66] Checking if "embed-certs-368536" exists ...
	I0729 19:17:30.017091  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:17:30.017123  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:17:30.017432  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:17:30.017477  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:17:30.034798  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40239
	I0729 19:17:30.035372  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:17:30.036179  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:17:30.036207  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:17:30.037055  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37901
	I0729 19:17:30.037161  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46563
	I0729 19:17:30.036581  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:17:30.037493  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetState
	I0729 19:17:30.037581  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:17:30.037636  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:17:30.038047  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:17:30.038056  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:17:30.038073  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:17:30.038217  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:17:30.038403  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:17:30.038623  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:17:30.038627  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetState
	I0729 19:17:30.039185  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:17:30.039221  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:17:30.040574  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:17:30.040687  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:17:30.042879  156414 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 19:17:30.042873  156414 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:17:30.044279  156414 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 19:17:30.044298  156414 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 19:17:30.044324  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:17:30.044544  156414 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 19:17:30.044593  156414 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 19:17:30.044621  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:17:30.048075  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:17:30.048402  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:17:30.048442  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:17:30.048462  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:17:30.048613  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:17:30.048761  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:17:30.048845  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:17:30.048890  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:17:30.048914  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:17:30.049132  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:17:30.049289  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:17:30.049306  156414 sshutil.go:53] new ssh client: &{IP:192.168.50.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/embed-certs-368536/id_rsa Username:docker}
	I0729 19:17:30.049441  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:17:30.049593  156414 sshutil.go:53] new ssh client: &{IP:192.168.50.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/embed-certs-368536/id_rsa Username:docker}
	I0729 19:17:30.055718  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34081
	I0729 19:17:30.056086  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:17:30.056521  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:17:30.056546  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:17:30.056931  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:17:30.057098  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetState
	I0729 19:17:30.058559  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:17:30.058795  156414 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 19:17:30.058810  156414 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 19:17:30.058825  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:17:30.061253  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:17:30.061842  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:17:30.061880  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:17:30.061900  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:17:30.062053  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:17:30.062195  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:17:30.062346  156414 sshutil.go:53] new ssh client: &{IP:192.168.50.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/embed-certs-368536/id_rsa Username:docker}
	I0729 19:17:30.192595  156414 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 19:17:30.208960  156414 node_ready.go:35] waiting up to 6m0s for node "embed-certs-368536" to be "Ready" ...
	I0729 19:17:30.216230  156414 node_ready.go:49] node "embed-certs-368536" has status "Ready":"True"
	I0729 19:17:30.216247  156414 node_ready.go:38] duration metric: took 7.255724ms for node "embed-certs-368536" to be "Ready" ...
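The node-readiness gate logged just above can be reproduced by hand; a minimal equivalent, assuming the kubectl context minikube creates for this profile (the context name matches the profile name), is:

    # Check the same Ready condition that node_ready.go polls.
    kubectl --context embed-certs-368536 get node embed-certs-368536
    kubectl --context embed-certs-368536 wait node/embed-certs-368536 --for=condition=Ready --timeout=60s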
	I0729 19:17:30.216256  156414 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:17:30.219988  156414 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:17:30.224074  156414 pod_ready.go:92] pod "etcd-embed-certs-368536" in "kube-system" namespace has status "Ready":"True"
	I0729 19:17:30.224099  156414 pod_ready.go:81] duration metric: took 4.088257ms for pod "etcd-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:17:30.224109  156414 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:17:30.228389  156414 pod_ready.go:92] pod "kube-apiserver-embed-certs-368536" in "kube-system" namespace has status "Ready":"True"
	I0729 19:17:30.228409  156414 pod_ready.go:81] duration metric: took 4.292723ms for pod "kube-apiserver-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:17:30.228417  156414 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:17:30.233616  156414 pod_ready.go:92] pod "kube-controller-manager-embed-certs-368536" in "kube-system" namespace has status "Ready":"True"
	I0729 19:17:30.233634  156414 pod_ready.go:81] duration metric: took 5.212376ms for pod "kube-controller-manager-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:17:30.233642  156414 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:17:30.242933  156414 pod_ready.go:92] pod "kube-scheduler-embed-certs-368536" in "kube-system" namespace has status "Ready":"True"
	I0729 19:17:30.242951  156414 pod_ready.go:81] duration metric: took 9.302507ms for pod "kube-scheduler-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:17:30.242959  156414 pod_ready.go:38] duration metric: took 26.692394ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:17:30.242973  156414 api_server.go:52] waiting for apiserver process to appear ...
	I0729 19:17:30.243016  156414 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:17:30.261484  156414 api_server.go:72] duration metric: took 266.652937ms to wait for apiserver process to appear ...
	I0729 19:17:30.261513  156414 api_server.go:88] waiting for apiserver healthz status ...
	I0729 19:17:30.261534  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:17:30.269760  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 200:
	ok
	I0729 19:17:30.270848  156414 api_server.go:141] control plane version: v1.30.3
	I0729 19:17:30.270872  156414 api_server.go:131] duration metric: took 9.352433ms to wait for apiserver health ...
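The healthz wait above is an HTTPS GET against the apiserver; a minimal manual equivalent, using the node address from this log and skipping certificate verification only for a quick spot check (not what minikube itself does), is:

    # Probe the same endpoint api_server.go checks; expect HTTP 200 with body "ok".
    curl -sk https://192.168.50.95:8443/healthz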
	I0729 19:17:30.270880  156414 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 19:17:30.312744  156414 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 19:17:30.317547  156414 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 19:17:30.317570  156414 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 19:17:30.332468  156414 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 19:17:30.352498  156414 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 19:17:30.352531  156414 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 19:17:30.392028  156414 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 19:17:30.392055  156414 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 19:17:30.413559  156414 system_pods.go:59] 4 kube-system pods found
	I0729 19:17:30.413586  156414 system_pods.go:61] "etcd-embed-certs-368536" [d1a1b671-c852-4a69-80fe-9ad13fffc01b] Running
	I0729 19:17:30.413591  156414 system_pods.go:61] "kube-apiserver-embed-certs-368536" [3b9307df-4472-499d-8a82-78f0f342e745] Running
	I0729 19:17:30.413595  156414 system_pods.go:61] "kube-controller-manager-embed-certs-368536" [89bf9b90-6735-417d-9f30-13eacb946ef4] Running
	I0729 19:17:30.413598  156414 system_pods.go:61] "kube-scheduler-embed-certs-368536" [a44062e3-c937-4765-9dfc-858cacbd3a90] Running
	I0729 19:17:30.413603  156414 system_pods.go:74] duration metric: took 142.71846ms to wait for pod list to return data ...
	I0729 19:17:30.413610  156414 default_sa.go:34] waiting for default service account to be created ...
	I0729 19:17:30.424371  156414 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 19:17:30.615212  156414 default_sa.go:45] found service account: "default"
	I0729 19:17:30.615237  156414 default_sa.go:55] duration metric: took 201.621467ms for default service account to be created ...
	I0729 19:17:30.615246  156414 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 19:17:30.831144  156414 system_pods.go:86] 4 kube-system pods found
	I0729 19:17:30.831175  156414 system_pods.go:89] "etcd-embed-certs-368536" [d1a1b671-c852-4a69-80fe-9ad13fffc01b] Running
	I0729 19:17:30.831182  156414 system_pods.go:89] "kube-apiserver-embed-certs-368536" [3b9307df-4472-499d-8a82-78f0f342e745] Running
	I0729 19:17:30.831186  156414 system_pods.go:89] "kube-controller-manager-embed-certs-368536" [89bf9b90-6735-417d-9f30-13eacb946ef4] Running
	I0729 19:17:30.831190  156414 system_pods.go:89] "kube-scheduler-embed-certs-368536" [a44062e3-c937-4765-9dfc-858cacbd3a90] Running
	I0729 19:17:30.831210  156414 retry.go:31] will retry after 301.650623ms: missing components: kube-dns, kube-proxy
	I0729 19:17:31.127532  156414 main.go:141] libmachine: Making call to close driver server
	I0729 19:17:31.127599  156414 main.go:141] libmachine: (embed-certs-368536) Calling .Close
	I0729 19:17:31.127595  156414 main.go:141] libmachine: Making call to close driver server
	I0729 19:17:31.127620  156414 main.go:141] libmachine: (embed-certs-368536) Calling .Close
	I0729 19:17:31.127910  156414 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:17:31.127925  156414 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:17:31.127935  156414 main.go:141] libmachine: Making call to close driver server
	I0729 19:17:31.127943  156414 main.go:141] libmachine: (embed-certs-368536) Calling .Close
	I0729 19:17:31.127958  156414 main.go:141] libmachine: (embed-certs-368536) DBG | Closing plugin on server side
	I0729 19:17:31.127974  156414 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:17:31.127985  156414 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:17:31.127999  156414 main.go:141] libmachine: Making call to close driver server
	I0729 19:17:31.128008  156414 main.go:141] libmachine: (embed-certs-368536) Calling .Close
	I0729 19:17:31.128212  156414 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:17:31.128221  156414 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:17:31.128440  156414 main.go:141] libmachine: (embed-certs-368536) DBG | Closing plugin on server side
	I0729 19:17:31.128455  156414 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:17:31.128467  156414 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:17:31.155504  156414 system_pods.go:86] 8 kube-system pods found
	I0729 19:17:31.155543  156414 system_pods.go:89] "coredns-7db6d8ff4d-ds92x" [81db7fca-a759-47fc-bea8-697adcb09763] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:17:31.155559  156414 system_pods.go:89] "coredns-7db6d8ff4d-gnrvx" [b3edf97c-8085-4d56-a12d-a6daa3782004] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:17:31.155565  156414 system_pods.go:89] "etcd-embed-certs-368536" [d1a1b671-c852-4a69-80fe-9ad13fffc01b] Running
	I0729 19:17:31.155570  156414 system_pods.go:89] "kube-apiserver-embed-certs-368536" [3b9307df-4472-499d-8a82-78f0f342e745] Running
	I0729 19:17:31.155575  156414 system_pods.go:89] "kube-controller-manager-embed-certs-368536" [89bf9b90-6735-417d-9f30-13eacb946ef4] Running
	I0729 19:17:31.155580  156414 system_pods.go:89] "kube-proxy-rxqlm" [1b66638b-5fb8-4bae-a129-8a1fb54389f4] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 19:17:31.155586  156414 system_pods.go:89] "kube-scheduler-embed-certs-368536" [a44062e3-c937-4765-9dfc-858cacbd3a90] Running
	I0729 19:17:31.155590  156414 system_pods.go:89] "storage-provisioner" [83d64a91-164a-45b2-9a82-d826e64e6cbd] Pending
	I0729 19:17:31.155606  156414 retry.go:31] will retry after 310.574298ms: missing components: kube-dns, kube-proxy
	I0729 19:17:31.159525  156414 main.go:141] libmachine: Making call to close driver server
	I0729 19:17:31.159546  156414 main.go:141] libmachine: (embed-certs-368536) Calling .Close
	I0729 19:17:31.160952  156414 main.go:141] libmachine: (embed-certs-368536) DBG | Closing plugin on server side
	I0729 19:17:31.160961  156414 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:17:31.160976  156414 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:17:31.346360  156414 main.go:141] libmachine: Making call to close driver server
	I0729 19:17:31.346390  156414 main.go:141] libmachine: (embed-certs-368536) Calling .Close
	I0729 19:17:31.346700  156414 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:17:31.346718  156414 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:17:31.346732  156414 main.go:141] libmachine: Making call to close driver server
	I0729 19:17:31.346742  156414 main.go:141] libmachine: (embed-certs-368536) Calling .Close
	I0729 19:17:31.347006  156414 main.go:141] libmachine: (embed-certs-368536) DBG | Closing plugin on server side
	I0729 19:17:31.347052  156414 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:17:31.347059  156414 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:17:31.347075  156414 addons.go:475] Verifying addon metrics-server=true in "embed-certs-368536"
	I0729 19:17:31.348884  156414 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0729 19:17:31.350473  156414 addons.go:510] duration metric: took 1.355642198s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
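To confirm by hand that the three addons enabled above actually landed, one option (assuming the same profile/context as this run) is to look for their workloads directly:

    # metrics-server and storage-provisioner live in kube-system; default-storageclass creates the "standard" class.
    kubectl --context embed-certs-368536 -n kube-system get deploy metrics-server
    kubectl --context embed-certs-368536 -n kube-system get pod storage-provisioner
    kubectl --context embed-certs-368536 get storageclass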
	I0729 19:17:31.473514  156414 system_pods.go:86] 9 kube-system pods found
	I0729 19:17:31.473553  156414 system_pods.go:89] "coredns-7db6d8ff4d-ds92x" [81db7fca-a759-47fc-bea8-697adcb09763] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:17:31.473561  156414 system_pods.go:89] "coredns-7db6d8ff4d-gnrvx" [b3edf97c-8085-4d56-a12d-a6daa3782004] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:17:31.473567  156414 system_pods.go:89] "etcd-embed-certs-368536" [d1a1b671-c852-4a69-80fe-9ad13fffc01b] Running
	I0729 19:17:31.473573  156414 system_pods.go:89] "kube-apiserver-embed-certs-368536" [3b9307df-4472-499d-8a82-78f0f342e745] Running
	I0729 19:17:31.473578  156414 system_pods.go:89] "kube-controller-manager-embed-certs-368536" [89bf9b90-6735-417d-9f30-13eacb946ef4] Running
	I0729 19:17:31.473583  156414 system_pods.go:89] "kube-proxy-rxqlm" [1b66638b-5fb8-4bae-a129-8a1fb54389f4] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 19:17:31.473587  156414 system_pods.go:89] "kube-scheduler-embed-certs-368536" [a44062e3-c937-4765-9dfc-858cacbd3a90] Running
	I0729 19:17:31.473596  156414 system_pods.go:89] "metrics-server-569cc877fc-9z4tp" [1382116a-be81-46cb-92b8-ae335164f846] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:17:31.473605  156414 system_pods.go:89] "storage-provisioner" [83d64a91-164a-45b2-9a82-d826e64e6cbd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 19:17:31.473622  156414 retry.go:31] will retry after 446.790872ms: missing components: kube-dns, kube-proxy
	I0729 19:17:31.928348  156414 system_pods.go:86] 9 kube-system pods found
	I0729 19:17:31.928381  156414 system_pods.go:89] "coredns-7db6d8ff4d-ds92x" [81db7fca-a759-47fc-bea8-697adcb09763] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:17:31.928389  156414 system_pods.go:89] "coredns-7db6d8ff4d-gnrvx" [b3edf97c-8085-4d56-a12d-a6daa3782004] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:17:31.928396  156414 system_pods.go:89] "etcd-embed-certs-368536" [d1a1b671-c852-4a69-80fe-9ad13fffc01b] Running
	I0729 19:17:31.928401  156414 system_pods.go:89] "kube-apiserver-embed-certs-368536" [3b9307df-4472-499d-8a82-78f0f342e745] Running
	I0729 19:17:31.928406  156414 system_pods.go:89] "kube-controller-manager-embed-certs-368536" [89bf9b90-6735-417d-9f30-13eacb946ef4] Running
	I0729 19:17:31.928409  156414 system_pods.go:89] "kube-proxy-rxqlm" [1b66638b-5fb8-4bae-a129-8a1fb54389f4] Running
	I0729 19:17:31.928413  156414 system_pods.go:89] "kube-scheduler-embed-certs-368536" [a44062e3-c937-4765-9dfc-858cacbd3a90] Running
	I0729 19:17:31.928420  156414 system_pods.go:89] "metrics-server-569cc877fc-9z4tp" [1382116a-be81-46cb-92b8-ae335164f846] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:17:31.928429  156414 system_pods.go:89] "storage-provisioner" [83d64a91-164a-45b2-9a82-d826e64e6cbd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 19:17:31.928444  156414 retry.go:31] will retry after 467.830899ms: missing components: kube-dns
	I0729 19:17:32.403619  156414 system_pods.go:86] 9 kube-system pods found
	I0729 19:17:32.403649  156414 system_pods.go:89] "coredns-7db6d8ff4d-ds92x" [81db7fca-a759-47fc-bea8-697adcb09763] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:17:32.403659  156414 system_pods.go:89] "coredns-7db6d8ff4d-gnrvx" [b3edf97c-8085-4d56-a12d-a6daa3782004] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:17:32.403665  156414 system_pods.go:89] "etcd-embed-certs-368536" [d1a1b671-c852-4a69-80fe-9ad13fffc01b] Running
	I0729 19:17:32.403670  156414 system_pods.go:89] "kube-apiserver-embed-certs-368536" [3b9307df-4472-499d-8a82-78f0f342e745] Running
	I0729 19:17:32.403676  156414 system_pods.go:89] "kube-controller-manager-embed-certs-368536" [89bf9b90-6735-417d-9f30-13eacb946ef4] Running
	I0729 19:17:32.403683  156414 system_pods.go:89] "kube-proxy-rxqlm" [1b66638b-5fb8-4bae-a129-8a1fb54389f4] Running
	I0729 19:17:32.403689  156414 system_pods.go:89] "kube-scheduler-embed-certs-368536" [a44062e3-c937-4765-9dfc-858cacbd3a90] Running
	I0729 19:17:32.403697  156414 system_pods.go:89] "metrics-server-569cc877fc-9z4tp" [1382116a-be81-46cb-92b8-ae335164f846] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:17:32.403706  156414 system_pods.go:89] "storage-provisioner" [83d64a91-164a-45b2-9a82-d826e64e6cbd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 19:17:32.403729  156414 retry.go:31] will retry after 745.010861ms: missing components: kube-dns
	I0729 19:17:33.163660  156414 system_pods.go:86] 9 kube-system pods found
	I0729 19:17:33.163697  156414 system_pods.go:89] "coredns-7db6d8ff4d-ds92x" [81db7fca-a759-47fc-bea8-697adcb09763] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:17:33.163710  156414 system_pods.go:89] "coredns-7db6d8ff4d-gnrvx" [b3edf97c-8085-4d56-a12d-a6daa3782004] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:17:33.163719  156414 system_pods.go:89] "etcd-embed-certs-368536" [d1a1b671-c852-4a69-80fe-9ad13fffc01b] Running
	I0729 19:17:33.163733  156414 system_pods.go:89] "kube-apiserver-embed-certs-368536" [3b9307df-4472-499d-8a82-78f0f342e745] Running
	I0729 19:17:33.163740  156414 system_pods.go:89] "kube-controller-manager-embed-certs-368536" [89bf9b90-6735-417d-9f30-13eacb946ef4] Running
	I0729 19:17:33.163746  156414 system_pods.go:89] "kube-proxy-rxqlm" [1b66638b-5fb8-4bae-a129-8a1fb54389f4] Running
	I0729 19:17:33.163751  156414 system_pods.go:89] "kube-scheduler-embed-certs-368536" [a44062e3-c937-4765-9dfc-858cacbd3a90] Running
	I0729 19:17:33.163761  156414 system_pods.go:89] "metrics-server-569cc877fc-9z4tp" [1382116a-be81-46cb-92b8-ae335164f846] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:17:33.163770  156414 system_pods.go:89] "storage-provisioner" [83d64a91-164a-45b2-9a82-d826e64e6cbd] Running
	I0729 19:17:33.163791  156414 retry.go:31] will retry after 658.944312ms: missing components: kube-dns
	I0729 19:17:33.830608  156414 system_pods.go:86] 9 kube-system pods found
	I0729 19:17:33.830643  156414 system_pods.go:89] "coredns-7db6d8ff4d-ds92x" [81db7fca-a759-47fc-bea8-697adcb09763] Running
	I0729 19:17:33.830650  156414 system_pods.go:89] "coredns-7db6d8ff4d-gnrvx" [b3edf97c-8085-4d56-a12d-a6daa3782004] Running
	I0729 19:17:33.830656  156414 system_pods.go:89] "etcd-embed-certs-368536" [d1a1b671-c852-4a69-80fe-9ad13fffc01b] Running
	I0729 19:17:33.830662  156414 system_pods.go:89] "kube-apiserver-embed-certs-368536" [3b9307df-4472-499d-8a82-78f0f342e745] Running
	I0729 19:17:33.830670  156414 system_pods.go:89] "kube-controller-manager-embed-certs-368536" [89bf9b90-6735-417d-9f30-13eacb946ef4] Running
	I0729 19:17:33.830675  156414 system_pods.go:89] "kube-proxy-rxqlm" [1b66638b-5fb8-4bae-a129-8a1fb54389f4] Running
	I0729 19:17:33.830682  156414 system_pods.go:89] "kube-scheduler-embed-certs-368536" [a44062e3-c937-4765-9dfc-858cacbd3a90] Running
	I0729 19:17:33.830692  156414 system_pods.go:89] "metrics-server-569cc877fc-9z4tp" [1382116a-be81-46cb-92b8-ae335164f846] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:17:33.830703  156414 system_pods.go:89] "storage-provisioner" [83d64a91-164a-45b2-9a82-d826e64e6cbd] Running
	I0729 19:17:33.830714  156414 system_pods.go:126] duration metric: took 3.215460876s to wait for k8s-apps to be running ...
	I0729 19:17:33.830726  156414 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 19:17:33.830824  156414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 19:17:33.847810  156414 system_svc.go:56] duration metric: took 17.074145ms WaitForService to wait for kubelet
	I0729 19:17:33.847837  156414 kubeadm.go:582] duration metric: took 3.853011216s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 19:17:33.847861  156414 node_conditions.go:102] verifying NodePressure condition ...
	I0729 19:17:33.850180  156414 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 19:17:33.850198  156414 node_conditions.go:123] node cpu capacity is 2
	I0729 19:17:33.850209  156414 node_conditions.go:105] duration metric: took 2.342951ms to run NodePressure ...
	I0729 19:17:33.850221  156414 start.go:241] waiting for startup goroutines ...
	I0729 19:17:33.850230  156414 start.go:246] waiting for cluster config update ...
	I0729 19:17:33.850242  156414 start.go:255] writing updated cluster config ...
	I0729 19:17:33.850512  156414 ssh_runner.go:195] Run: rm -f paused
	I0729 19:17:33.898396  156414 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 19:17:33.899771  156414 out.go:177] * Done! kubectl is now configured to use "embed-certs-368536" cluster and "default" namespace by default
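The retry.go entries above poll the kube-system pod list until kube-dns and kube-proxy report Running; a rough one-line equivalent of that gate, assuming the context minikube just configured, is:

    # Block until every kube-system pod is Ready, or fail after the same 6m budget the test uses.
    kubectl --context embed-certs-368536 -n kube-system wait pod --all --for=condition=Ready --timeout=6m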
	
	
	==> CRI-O <==
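The lines below are CRI-O's debug-level traces of the CRI calls made by the kubelet (Version, ImageFsInfo, ListContainers); the serialized ListContainersResponse payloads are long but simply enumerate the containers on the node. A more readable inventory of the same containers, assuming crictl is available on the node (it is on standard minikube nodes), can be pulled with:

    # Human-readable counterpart to the ListContainers responses below, run against the no-preload-524369 node.
    minikube -p no-preload-524369 ssh -- sudo crictl ps -a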
	Jul 29 19:20:20 no-preload-524369 crio[730]: time="2024-07-29 19:20:20.320423048Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722280820320392299,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3151613b-b7bd-4e77-8e23-5a2ac223fe06 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:20:20 no-preload-524369 crio[730]: time="2024-07-29 19:20:20.323455568Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0483ac49-6e19-4e49-9f68-5e2b1ca4a7a1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:20:20 no-preload-524369 crio[730]: time="2024-07-29 19:20:20.323705010Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0483ac49-6e19-4e49-9f68-5e2b1ca4a7a1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:20:20 no-preload-524369 crio[730]: time="2024-07-29 19:20:20.324167606Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:39ca77d323edde3a471cc09829107e4b60723540d97038b6563f3515f5632480,PodSandboxId:401d89d64492aacbf7f3a9be7bb29b22e09fe57410a3a90c0d30ea6acb1c67d4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722279900306681926,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c9ecca1-222d-423a-a4a8-617b3b5dceaf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05dd5140f888cf33e8da2f08c25b80769c5b124f7e0d009eda79f69f0198a964,PodSandboxId:1fab013325b34608f64e4ea772c3b59e00e46345163f35e2d1517f5cba538bf5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722279899738581731,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-sqjsh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5afcfe5e-4f63-47fc-a382-d2485c80fd87,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab2c6bd4e858e35e487a738492bd7390c9318baf1dbe8f3e7fb1be5aba587f35,PodSandboxId:40866c15eaf8a6a5e4f9936d43e712088192d4dfd4ed7502d7614e27d52dc512,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722279899807179379,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-w7ptq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d
9df116-aead-4d87-ade9-397d402c6a9b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecf1d196ad19dcb91c3e74f025026a283ee2d7119044448e7baeffdf7d3e36bd,PodSandboxId:2078f2612db573faa57e329b4061a9f9c54a65348237c13eb76ef7585cdb5886,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1722279898613706363,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fzrdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 047bc0eb-0615-4a77-a835-99a264b0b5cf,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:774b6f05ee3602d6336ae397ef0e99a0c2efdb0e8e875055fb477f14ad7c18d9,PodSandboxId:01fb15d87a6fd9f15dadac3916ec96d1101ffd640b5d737afa885cd788903915,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722279887600009834,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-524369,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8f315ab615ee060c6dba20ec59c10b9,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:565b7d4870cc8f097db55c6187e1917737e3e89601c40ad729ce4bcc12cef063,PodSandboxId:c1b2379aa4108f6d02bf5d9bb8dfe8f16a7005172004cdb6a2fc4f529ce9ca1a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722279887650207443,Labels:map[string]string{io.kube
rnetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-524369,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 397c92beaad6c72a3c97d4dc7f6d1bd4,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db5c6899215ea5671d54ffdbef718a3ed7472a7e909b713e0cd3e3924b9aff5a,PodSandboxId:2c2158c077b71d9cb1ecf5b2a581ed85e76fc457a84c3d5c0c331fef9f20f319,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722279887611204045,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-524369,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fd43d18d99a26b96f54dd633497cef2,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0123ed63d3bb67b7ed32c2266ea6014d1f31cd96ef54a155fa5f07642548a9f,PodSandboxId:eb424fbb490db28bef3d8dbf82d410d397cfd402eb3faac07ba3a174221a620e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722279887524150874,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-524369,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55ad145f5a950f7c5aa599aef2bca250,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93d7fa5f82e2cfb7f4316f748e51228a62bcf15c90cc7a6f2f50c04e84fc6cfb,PodSandboxId:08b1ebb8c2b21bd270b8d1b41c670ee096631918c3e48940cf706bc719221c11,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722279608178041451,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-524369,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55ad145f5a950f7c5aa599aef2bca250,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0483ac49-6e19-4e49-9f68-5e2b1ca4a7a1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:20:20 no-preload-524369 crio[730]: time="2024-07-29 19:20:20.362985950Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ecf5c11b-a949-4935-95a6-e2aa5ae35841 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:20:20 no-preload-524369 crio[730]: time="2024-07-29 19:20:20.363096920Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ecf5c11b-a949-4935-95a6-e2aa5ae35841 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:20:20 no-preload-524369 crio[730]: time="2024-07-29 19:20:20.364420080Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=47ff3872-9d5d-49d7-abb1-63f77f3f5775 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:20:20 no-preload-524369 crio[730]: time="2024-07-29 19:20:20.364889812Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722280820364867340,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=47ff3872-9d5d-49d7-abb1-63f77f3f5775 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:20:20 no-preload-524369 crio[730]: time="2024-07-29 19:20:20.365471822Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f78ba1b0-39c3-4439-a332-57e9564a5ad6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:20:20 no-preload-524369 crio[730]: time="2024-07-29 19:20:20.365540303Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f78ba1b0-39c3-4439-a332-57e9564a5ad6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:20:20 no-preload-524369 crio[730]: time="2024-07-29 19:20:20.365791170Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:39ca77d323edde3a471cc09829107e4b60723540d97038b6563f3515f5632480,PodSandboxId:401d89d64492aacbf7f3a9be7bb29b22e09fe57410a3a90c0d30ea6acb1c67d4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722279900306681926,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c9ecca1-222d-423a-a4a8-617b3b5dceaf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05dd5140f888cf33e8da2f08c25b80769c5b124f7e0d009eda79f69f0198a964,PodSandboxId:1fab013325b34608f64e4ea772c3b59e00e46345163f35e2d1517f5cba538bf5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722279899738581731,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-sqjsh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5afcfe5e-4f63-47fc-a382-d2485c80fd87,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab2c6bd4e858e35e487a738492bd7390c9318baf1dbe8f3e7fb1be5aba587f35,PodSandboxId:40866c15eaf8a6a5e4f9936d43e712088192d4dfd4ed7502d7614e27d52dc512,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722279899807179379,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-w7ptq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d
9df116-aead-4d87-ade9-397d402c6a9b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecf1d196ad19dcb91c3e74f025026a283ee2d7119044448e7baeffdf7d3e36bd,PodSandboxId:2078f2612db573faa57e329b4061a9f9c54a65348237c13eb76ef7585cdb5886,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1722279898613706363,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fzrdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 047bc0eb-0615-4a77-a835-99a264b0b5cf,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:774b6f05ee3602d6336ae397ef0e99a0c2efdb0e8e875055fb477f14ad7c18d9,PodSandboxId:01fb15d87a6fd9f15dadac3916ec96d1101ffd640b5d737afa885cd788903915,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722279887600009834,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-524369,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8f315ab615ee060c6dba20ec59c10b9,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:565b7d4870cc8f097db55c6187e1917737e3e89601c40ad729ce4bcc12cef063,PodSandboxId:c1b2379aa4108f6d02bf5d9bb8dfe8f16a7005172004cdb6a2fc4f529ce9ca1a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722279887650207443,Labels:map[string]string{io.kube
rnetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-524369,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 397c92beaad6c72a3c97d4dc7f6d1bd4,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db5c6899215ea5671d54ffdbef718a3ed7472a7e909b713e0cd3e3924b9aff5a,PodSandboxId:2c2158c077b71d9cb1ecf5b2a581ed85e76fc457a84c3d5c0c331fef9f20f319,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722279887611204045,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-524369,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fd43d18d99a26b96f54dd633497cef2,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0123ed63d3bb67b7ed32c2266ea6014d1f31cd96ef54a155fa5f07642548a9f,PodSandboxId:eb424fbb490db28bef3d8dbf82d410d397cfd402eb3faac07ba3a174221a620e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722279887524150874,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-524369,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55ad145f5a950f7c5aa599aef2bca250,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93d7fa5f82e2cfb7f4316f748e51228a62bcf15c90cc7a6f2f50c04e84fc6cfb,PodSandboxId:08b1ebb8c2b21bd270b8d1b41c670ee096631918c3e48940cf706bc719221c11,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722279608178041451,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-524369,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55ad145f5a950f7c5aa599aef2bca250,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f78ba1b0-39c3-4439-a332-57e9564a5ad6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:20:20 no-preload-524369 crio[730]: time="2024-07-29 19:20:20.408270296Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a1982681-737b-42ca-90bb-9a87265e5839 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:20:20 no-preload-524369 crio[730]: time="2024-07-29 19:20:20.408357736Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a1982681-737b-42ca-90bb-9a87265e5839 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:20:20 no-preload-524369 crio[730]: time="2024-07-29 19:20:20.410295751Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c4236ff7-60b5-4b30-b015-7c2799b327ed name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:20:20 no-preload-524369 crio[730]: time="2024-07-29 19:20:20.410743956Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722280820410603761,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c4236ff7-60b5-4b30-b015-7c2799b327ed name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:20:20 no-preload-524369 crio[730]: time="2024-07-29 19:20:20.411441324Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9eb19145-2d36-430d-adf9-e544d032647c name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:20:20 no-preload-524369 crio[730]: time="2024-07-29 19:20:20.411513120Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9eb19145-2d36-430d-adf9-e544d032647c name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:20:20 no-preload-524369 crio[730]: time="2024-07-29 19:20:20.411754031Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:39ca77d323edde3a471cc09829107e4b60723540d97038b6563f3515f5632480,PodSandboxId:401d89d64492aacbf7f3a9be7bb29b22e09fe57410a3a90c0d30ea6acb1c67d4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722279900306681926,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c9ecca1-222d-423a-a4a8-617b3b5dceaf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05dd5140f888cf33e8da2f08c25b80769c5b124f7e0d009eda79f69f0198a964,PodSandboxId:1fab013325b34608f64e4ea772c3b59e00e46345163f35e2d1517f5cba538bf5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722279899738581731,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-sqjsh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5afcfe5e-4f63-47fc-a382-d2485c80fd87,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab2c6bd4e858e35e487a738492bd7390c9318baf1dbe8f3e7fb1be5aba587f35,PodSandboxId:40866c15eaf8a6a5e4f9936d43e712088192d4dfd4ed7502d7614e27d52dc512,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722279899807179379,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-w7ptq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d
9df116-aead-4d87-ade9-397d402c6a9b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecf1d196ad19dcb91c3e74f025026a283ee2d7119044448e7baeffdf7d3e36bd,PodSandboxId:2078f2612db573faa57e329b4061a9f9c54a65348237c13eb76ef7585cdb5886,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1722279898613706363,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fzrdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 047bc0eb-0615-4a77-a835-99a264b0b5cf,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:774b6f05ee3602d6336ae397ef0e99a0c2efdb0e8e875055fb477f14ad7c18d9,PodSandboxId:01fb15d87a6fd9f15dadac3916ec96d1101ffd640b5d737afa885cd788903915,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722279887600009834,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-524369,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8f315ab615ee060c6dba20ec59c10b9,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:565b7d4870cc8f097db55c6187e1917737e3e89601c40ad729ce4bcc12cef063,PodSandboxId:c1b2379aa4108f6d02bf5d9bb8dfe8f16a7005172004cdb6a2fc4f529ce9ca1a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722279887650207443,Labels:map[string]string{io.kube
rnetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-524369,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 397c92beaad6c72a3c97d4dc7f6d1bd4,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db5c6899215ea5671d54ffdbef718a3ed7472a7e909b713e0cd3e3924b9aff5a,PodSandboxId:2c2158c077b71d9cb1ecf5b2a581ed85e76fc457a84c3d5c0c331fef9f20f319,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722279887611204045,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-524369,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fd43d18d99a26b96f54dd633497cef2,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0123ed63d3bb67b7ed32c2266ea6014d1f31cd96ef54a155fa5f07642548a9f,PodSandboxId:eb424fbb490db28bef3d8dbf82d410d397cfd402eb3faac07ba3a174221a620e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722279887524150874,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-524369,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55ad145f5a950f7c5aa599aef2bca250,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93d7fa5f82e2cfb7f4316f748e51228a62bcf15c90cc7a6f2f50c04e84fc6cfb,PodSandboxId:08b1ebb8c2b21bd270b8d1b41c670ee096631918c3e48940cf706bc719221c11,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722279608178041451,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-524369,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55ad145f5a950f7c5aa599aef2bca250,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9eb19145-2d36-430d-adf9-e544d032647c name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:20:20 no-preload-524369 crio[730]: time="2024-07-29 19:20:20.450518705Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b7fb85f7-5733-40ce-ba1e-68a91e6be484 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:20:20 no-preload-524369 crio[730]: time="2024-07-29 19:20:20.450664322Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b7fb85f7-5733-40ce-ba1e-68a91e6be484 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:20:20 no-preload-524369 crio[730]: time="2024-07-29 19:20:20.451562584Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ed18e4e9-fc77-4ccf-8e32-bab447b6ff49 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:20:20 no-preload-524369 crio[730]: time="2024-07-29 19:20:20.452077915Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722280820452051476,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ed18e4e9-fc77-4ccf-8e32-bab447b6ff49 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:20:20 no-preload-524369 crio[730]: time="2024-07-29 19:20:20.452607062Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9ca5938a-cbb3-4224-a24d-d1910691f5df name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:20:20 no-preload-524369 crio[730]: time="2024-07-29 19:20:20.452734834Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9ca5938a-cbb3-4224-a24d-d1910691f5df name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:20:20 no-preload-524369 crio[730]: time="2024-07-29 19:20:20.453013117Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:39ca77d323edde3a471cc09829107e4b60723540d97038b6563f3515f5632480,PodSandboxId:401d89d64492aacbf7f3a9be7bb29b22e09fe57410a3a90c0d30ea6acb1c67d4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722279900306681926,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c9ecca1-222d-423a-a4a8-617b3b5dceaf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05dd5140f888cf33e8da2f08c25b80769c5b124f7e0d009eda79f69f0198a964,PodSandboxId:1fab013325b34608f64e4ea772c3b59e00e46345163f35e2d1517f5cba538bf5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722279899738581731,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-sqjsh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5afcfe5e-4f63-47fc-a382-d2485c80fd87,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab2c6bd4e858e35e487a738492bd7390c9318baf1dbe8f3e7fb1be5aba587f35,PodSandboxId:40866c15eaf8a6a5e4f9936d43e712088192d4dfd4ed7502d7614e27d52dc512,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722279899807179379,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-w7ptq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d
9df116-aead-4d87-ade9-397d402c6a9b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecf1d196ad19dcb91c3e74f025026a283ee2d7119044448e7baeffdf7d3e36bd,PodSandboxId:2078f2612db573faa57e329b4061a9f9c54a65348237c13eb76ef7585cdb5886,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1722279898613706363,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fzrdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 047bc0eb-0615-4a77-a835-99a264b0b5cf,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:774b6f05ee3602d6336ae397ef0e99a0c2efdb0e8e875055fb477f14ad7c18d9,PodSandboxId:01fb15d87a6fd9f15dadac3916ec96d1101ffd640b5d737afa885cd788903915,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722279887600009834,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-524369,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8f315ab615ee060c6dba20ec59c10b9,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:565b7d4870cc8f097db55c6187e1917737e3e89601c40ad729ce4bcc12cef063,PodSandboxId:c1b2379aa4108f6d02bf5d9bb8dfe8f16a7005172004cdb6a2fc4f529ce9ca1a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722279887650207443,Labels:map[string]string{io.kube
rnetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-524369,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 397c92beaad6c72a3c97d4dc7f6d1bd4,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db5c6899215ea5671d54ffdbef718a3ed7472a7e909b713e0cd3e3924b9aff5a,PodSandboxId:2c2158c077b71d9cb1ecf5b2a581ed85e76fc457a84c3d5c0c331fef9f20f319,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722279887611204045,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-524369,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fd43d18d99a26b96f54dd633497cef2,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0123ed63d3bb67b7ed32c2266ea6014d1f31cd96ef54a155fa5f07642548a9f,PodSandboxId:eb424fbb490db28bef3d8dbf82d410d397cfd402eb3faac07ba3a174221a620e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722279887524150874,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-524369,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55ad145f5a950f7c5aa599aef2bca250,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93d7fa5f82e2cfb7f4316f748e51228a62bcf15c90cc7a6f2f50c04e84fc6cfb,PodSandboxId:08b1ebb8c2b21bd270b8d1b41c670ee096631918c3e48940cf706bc719221c11,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722279608178041451,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-524369,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55ad145f5a950f7c5aa599aef2bca250,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9ca5938a-cbb3-4224-a24d-d1910691f5df name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	39ca77d323edd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   401d89d64492a       storage-provisioner
	ab2c6bd4e858e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 minutes ago      Running             coredns                   0                   40866c15eaf8a       coredns-5cfdc65f69-w7ptq
	05dd5140f888c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 minutes ago      Running             coredns                   0                   1fab013325b34       coredns-5cfdc65f69-sqjsh
	ecf1d196ad19d       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   15 minutes ago      Running             kube-proxy                0                   2078f2612db57       kube-proxy-fzrdv
	565b7d4870cc8       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   15 minutes ago      Running             etcd                      2                   c1b2379aa4108       etcd-no-preload-524369
	db5c6899215ea       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   15 minutes ago      Running             kube-controller-manager   2                   2c2158c077b71       kube-controller-manager-no-preload-524369
	774b6f05ee360       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   15 minutes ago      Running             kube-scheduler            2                   01fb15d87a6fd       kube-scheduler-no-preload-524369
	b0123ed63d3bb       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   15 minutes ago      Running             kube-apiserver            2                   eb424fbb490db       kube-apiserver-no-preload-524369
	93d7fa5f82e2c       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   20 minutes ago      Exited              kube-apiserver            1                   08b1ebb8c2b21       kube-apiserver-no-preload-524369
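
The table above and the repeated ListContainers traces in the crio log are two views of the same CRI RuntimeService. Below is a minimal sketch (assumptions: the CRI-O socket path unix:///var/run/crio/crio.sock shown in the node annotations further down, and the standard k8s.io/cri-api v1 client; none of this is taken from the minikube test harness) of how the same listing could be reproduced in Go. On the node itself, "sudo crictl ps -a" prints an equivalent table.

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Dial the CRI-O runtime endpoint over its unix socket (the path used here
	// is the one advertised in the node's cri-socket annotation; adjust as needed).
	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)

	// An empty filter corresponds to the "No filters were applied, returning
	// full container list" lines in the crio debug log above.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		// Print a shortened ID plus name, attempt and state, similar to crictl ps.
		fmt.Printf("%.13s  %s  attempt=%d  state=%v\n",
			c.Id, c.Metadata.Name, c.Metadata.Attempt, c.State)
	}
}
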
	
	
	==> coredns [05dd5140f888cf33e8da2f08c25b80769c5b124f7e0d009eda79f69f0198a964] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [ab2c6bd4e858e35e487a738492bd7390c9318baf1dbe8f3e7fb1be5aba587f35] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-524369
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-524369
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=de53afae5e8f4269438099f4dad14d93a8a17e35
	                    minikube.k8s.io/name=no-preload-524369
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T19_04_53_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 19:04:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-524369
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 19:20:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 19:15:15 +0000   Mon, 29 Jul 2024 19:04:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 19:15:15 +0000   Mon, 29 Jul 2024 19:04:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 19:15:15 +0000   Mon, 29 Jul 2024 19:04:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 19:15:15 +0000   Mon, 29 Jul 2024 19:04:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.7
	  Hostname:    no-preload-524369
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2a198bd9cf7443ae868352cb5dee02a8
	  System UUID:                2a198bd9-cf74-43ae-8683-52cb5dee02a8
	  Boot ID:                    e2d860a1-cb75-47b3-a4d7-33e5fbc5df5a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5cfdc65f69-sqjsh                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-5cfdc65f69-w7ptq                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-no-preload-524369                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-no-preload-524369             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-no-preload-524369    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-fzrdv                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-no-preload-524369             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-78fcd8795b-l6hjr              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node no-preload-524369 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node no-preload-524369 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node no-preload-524369 status is now: NodeHasSufficientPID
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m                kubelet          Node no-preload-524369 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m                kubelet          Node no-preload-524369 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m                kubelet          Node no-preload-524369 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                node-controller  Node no-preload-524369 event: Registered Node no-preload-524369 in Controller
	
	
	==> dmesg <==
	[  +0.046879] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.147467] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.555208] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.606893] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.637768] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.053048] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055013] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +0.191053] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[  +0.145799] systemd-fstab-generator[684]: Ignoring "noauto" option for root device
	[  +0.275273] systemd-fstab-generator[713]: Ignoring "noauto" option for root device
	[Jul29 19:00] systemd-fstab-generator[1182]: Ignoring "noauto" option for root device
	[  +0.060068] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.745757] systemd-fstab-generator[1304]: Ignoring "noauto" option for root device
	[  +4.522712] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.919707] kauditd_printk_skb: 86 callbacks suppressed
	[ +26.106375] kauditd_printk_skb: 3 callbacks suppressed
	[Jul29 19:04] kauditd_printk_skb: 3 callbacks suppressed
	[  +1.503919] systemd-fstab-generator[2930]: Ignoring "noauto" option for root device
	[  +4.689688] kauditd_printk_skb: 58 callbacks suppressed
	[  +1.892769] systemd-fstab-generator[3252]: Ignoring "noauto" option for root device
	[  +5.424249] systemd-fstab-generator[3368]: Ignoring "noauto" option for root device
	[  +0.116696] kauditd_printk_skb: 14 callbacks suppressed
	[Jul29 19:05] kauditd_printk_skb: 88 callbacks suppressed
	
	
	==> etcd [565b7d4870cc8f097db55c6187e1917737e3e89601c40ad729ce4bcc12cef063] <==
	{"level":"info","ts":"2024-07-29T19:04:48.987131Z","caller":"etcdserver/server.go:2652","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T19:12:19.166056Z","caller":"traceutil/trace.go:171","msg":"trace[754036214] linearizableReadLoop","detail":"{readStateIndex:959; appliedIndex:958; }","duration":"385.098583ms","start":"2024-07-29T19:12:18.780867Z","end":"2024-07-29T19:12:19.165966Z","steps":["trace[754036214] 'read index received'  (duration: 384.921586ms)","trace[754036214] 'applied index is now lower than readState.Index'  (duration: 176.466µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-29T19:12:19.166225Z","caller":"traceutil/trace.go:171","msg":"trace[1167817960] transaction","detail":"{read_only:false; response_revision:856; number_of_response:1; }","duration":"586.660456ms","start":"2024-07-29T19:12:18.579556Z","end":"2024-07-29T19:12:19.166216Z","steps":["trace[1167817960] 'process raft request'  (duration: 586.282809ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T19:12:19.167005Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T19:12:18.579531Z","time spent":"586.715014ms","remote":"127.0.0.1:39992","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":682,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-nrajhd2gmtrsbqymzeo77wiq6y\" mod_revision:848 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-nrajhd2gmtrsbqymzeo77wiq6y\" value_size:609 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-nrajhd2gmtrsbqymzeo77wiq6y\" > >"}
	{"level":"warn","ts":"2024-07-29T19:12:19.167229Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"386.347065ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1118"}
	{"level":"info","ts":"2024-07-29T19:12:19.167376Z","caller":"traceutil/trace.go:171","msg":"trace[1555561073] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:856; }","duration":"386.495398ms","start":"2024-07-29T19:12:18.780863Z","end":"2024-07-29T19:12:19.167358Z","steps":["trace[1555561073] 'agreement among raft nodes before linearized reading'  (duration: 386.263983ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T19:12:19.167402Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T19:12:18.780831Z","time spent":"386.562855ms","remote":"127.0.0.1:39916","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1142,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"warn","ts":"2024-07-29T19:12:19.167529Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"354.257138ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-29T19:12:19.167565Z","caller":"traceutil/trace.go:171","msg":"trace[1917566478] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:856; }","duration":"354.293166ms","start":"2024-07-29T19:12:18.813266Z","end":"2024-07-29T19:12:19.167559Z","steps":["trace[1917566478] 'agreement among raft nodes before linearized reading'  (duration: 354.246203ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T19:12:19.168031Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"300.174502ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-29T19:12:19.168108Z","caller":"traceutil/trace.go:171","msg":"trace[1547239571] range","detail":"{range_begin:/registry/runtimeclasses/; range_end:/registry/runtimeclasses0; response_count:0; response_revision:856; }","duration":"300.269384ms","start":"2024-07-29T19:12:18.867832Z","end":"2024-07-29T19:12:19.168102Z","steps":["trace[1547239571] 'agreement among raft nodes before linearized reading'  (duration: 300.044313ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T19:12:19.168134Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T19:12:18.86779Z","time spent":"300.338658ms","remote":"127.0.0.1:40036","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":0,"response size":29,"request content":"key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" count_only:true "}
	{"level":"warn","ts":"2024-07-29T19:12:19.772357Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"485.827759ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8564880020026481334 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:855 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-07-29T19:12:19.772456Z","caller":"traceutil/trace.go:171","msg":"trace[861697580] linearizableReadLoop","detail":"{readStateIndex:960; appliedIndex:959; }","duration":"394.397689ms","start":"2024-07-29T19:12:19.378045Z","end":"2024-07-29T19:12:19.772442Z","steps":["trace[861697580] 'read index received'  (duration: 27.437µs)","trace[861697580] 'applied index is now lower than readState.Index'  (duration: 394.369037ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T19:12:19.772572Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"394.51736ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-29T19:12:19.772703Z","caller":"traceutil/trace.go:171","msg":"trace[518326760] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:857; }","duration":"394.652062ms","start":"2024-07-29T19:12:19.37804Z","end":"2024-07-29T19:12:19.772692Z","steps":["trace[518326760] 'agreement among raft nodes before linearized reading'  (duration: 394.447526ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T19:12:19.77275Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T19:12:19.378002Z","time spent":"394.739298ms","remote":"127.0.0.1:39938","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2024-07-29T19:12:19.773052Z","caller":"traceutil/trace.go:171","msg":"trace[372355499] transaction","detail":"{read_only:false; response_revision:857; number_of_response:1; }","duration":"601.058857ms","start":"2024-07-29T19:12:19.17198Z","end":"2024-07-29T19:12:19.773039Z","steps":["trace[372355499] 'process raft request'  (duration: 114.212232ms)","trace[372355499] 'compare'  (duration: 485.660386ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T19:12:19.773158Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T19:12:19.171965Z","time spent":"601.148816ms","remote":"127.0.0.1:39916","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:855 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-07-29T19:14:49.012822Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":734}
	{"level":"info","ts":"2024-07-29T19:14:49.022382Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":734,"took":"9.179641ms","hash":138454541,"current-db-size-bytes":2322432,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":2322432,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2024-07-29T19:14:49.022456Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":138454541,"revision":734,"compact-revision":-1}
	{"level":"info","ts":"2024-07-29T19:19:49.019429Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":976}
	{"level":"info","ts":"2024-07-29T19:19:49.024527Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":976,"took":"4.319251ms","hash":3597177652,"current-db-size-bytes":2322432,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":1540096,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-07-29T19:19:49.024661Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3597177652,"revision":976,"compact-revision":734}
	
	
	==> kernel <==
	 19:20:20 up 20 min,  0 users,  load average: 0.11, 0.23, 0.18
	Linux no-preload-524369 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [93d7fa5f82e2cfb7f4316f748e51228a62bcf15c90cc7a6f2f50c04e84fc6cfb] <==
	W0729 19:04:43.741946       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:43.742069       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:43.747568       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:43.780764       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:43.785324       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:43.804057       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:43.806408       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:43.823970       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:43.844710       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:43.903239       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:43.923926       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:43.954784       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:43.965589       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:44.027054       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:44.115266       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:44.126936       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:44.210717       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:44.216189       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:44.222723       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:44.238113       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:44.262938       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:44.272729       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:44.300901       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:44.454522       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:04:44.466940       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [b0123ed63d3bb67b7ed32c2266ea6014d1f31cd96ef54a155fa5f07642548a9f] <==
	I0729 19:15:51.380155       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0729 19:15:51.380195       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 19:17:51.381191       1 handler_proxy.go:99] no RequestInfo found in the context
	E0729 19:17:51.381353       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0729 19:17:51.381427       1 handler_proxy.go:99] no RequestInfo found in the context
	E0729 19:17:51.381445       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0729 19:17:51.382514       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0729 19:17:51.382568       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 19:19:50.381171       1 handler_proxy.go:99] no RequestInfo found in the context
	E0729 19:19:50.381760       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0729 19:19:51.383090       1 handler_proxy.go:99] no RequestInfo found in the context
	E0729 19:19:51.383224       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0729 19:19:51.383109       1 handler_proxy.go:99] no RequestInfo found in the context
	E0729 19:19:51.383350       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0729 19:19:51.384392       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0729 19:19:51.384423       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
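
The repeated 503s for v1beta1.metrics.k8s.io here, together with the "stale GroupVersion discovery" errors from the controller-manager below, are consistent with the metrics-server related failures listed at the top of this report. The following is a minimal client-go sketch for probing that aggregated group directly; the kubeconfig path and the whole setup are illustrative assumptions, not taken from the test code. Equivalently, "kubectl get apiservice v1beta1.metrics.k8s.io" shows the Available condition reported by the aggregator.

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load a kubeconfig the way kubectl would; the path is an assumption.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// If the aggregated metrics API is healthy this returns its resource list;
	// an error here mirrors the "failed to download v1beta1.metrics.k8s.io"
	// 503s in the apiserver log above.
	res, err := clientset.Discovery().ServerResourcesForGroupVersion("metrics.k8s.io/v1beta1")
	if err != nil {
		fmt.Println("metrics API unavailable:", err)
		return
	}
	for _, r := range res.APIResources {
		fmt.Println("available resource:", r.Name)
	}
}
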
	
	
	==> kube-controller-manager [db5c6899215ea5671d54ffdbef718a3ed7472a7e909b713e0cd3e3924b9aff5a] <==
	E0729 19:14:58.403918       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 19:14:58.464897       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0729 19:15:15.163454       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-524369"
	E0729 19:15:28.411484       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 19:15:28.472266       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0729 19:15:57.253381       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="476.818µs"
	E0729 19:15:58.419213       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 19:15:58.481578       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0729 19:16:08.248133       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="160.874µs"
	E0729 19:16:28.427321       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 19:16:28.490188       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:16:58.437889       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 19:16:58.497559       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:17:28.446597       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 19:17:28.506081       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:17:58.454367       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 19:17:58.515767       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:18:28.463214       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 19:18:28.524227       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:18:58.470745       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 19:18:58.533200       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:19:28.481196       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 19:19:28.543955       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:19:58.487851       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 19:19:58.551776       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
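The controller-manager noise here is a downstream effect of the same missing metrics API: the resource-quota and garbage-collector controllers re-run discovery roughly every 30 seconds and keep finding metrics.k8s.io/v1beta1 stale. The aggregated endpoint can be probed directly (a sketch, assuming the same no-preload-524369 context):

    kubectl --context no-preload-524369 get --raw /apis/metrics.k8s.io/v1beta1

While metrics-server is down this returns the same 503 "service unavailable" body seen in the kube-apiserver log; once the pod is healthy it returns the group's APIResourceList.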
	
	
	==> kube-proxy [ecf1d196ad19dcb91c3e74f025026a283ee2d7119044448e7baeffdf7d3e36bd] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0729 19:04:59.058510       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0729 19:04:59.071860       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.72.7"]
	E0729 19:04:59.071943       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0729 19:04:59.134763       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0729 19:04:59.134825       1 server.go:246] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 19:04:59.134860       1 server_linux.go:170] "Using iptables Proxier"
	I0729 19:04:59.143573       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0729 19:04:59.143994       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0729 19:04:59.144009       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 19:04:59.148432       1 config.go:104] "Starting endpoint slice config controller"
	I0729 19:04:59.148445       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 19:04:59.148482       1 config.go:197] "Starting service config controller"
	I0729 19:04:59.148486       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 19:04:59.149289       1 config.go:326] "Starting node config controller"
	I0729 19:04:59.149298       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 19:04:59.249359       1 shared_informer.go:320] Caches are synced for service config
	I0729 19:04:59.249454       1 shared_informer.go:320] Caches are synced for node config
	I0729 19:04:59.249465       1 shared_informer.go:320] Caches are synced for endpoint slice config
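The "Error cleaning up nftables rules ... Operation not supported" entries at startup are kube-proxy probing for nftables support on a kernel that lacks it; the subsequent "Using iptables Proxier" line shows it fell back cleanly, so these are noise rather than a failure. The fallback can be checked on the node itself (a sketch; KUBE-SERVICES is the nat-table chain kube-proxy programs in iptables mode):

    out/minikube-linux-amd64 -p no-preload-524369 ssh "sudo iptables -t nat -L KUBE-SERVICES | head"

Non-empty output means the iptables proxier installed its rules despite the nftables warnings.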
	
	
	==> kube-scheduler [774b6f05ee3602d6336ae397ef0e99a0c2efdb0e8e875055fb477f14ad7c18d9] <==
	W0729 19:04:50.452149       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 19:04:50.452181       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0729 19:04:50.452266       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 19:04:50.452297       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0729 19:04:50.452370       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 19:04:50.452400       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0729 19:04:50.452494       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 19:04:50.452556       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0729 19:04:50.452689       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 19:04:50.452738       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0729 19:04:51.320978       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 19:04:51.321021       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0729 19:04:51.412861       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 19:04:51.413058       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0729 19:04:51.446274       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 19:04:51.446397       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0729 19:04:51.534365       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 19:04:51.534452       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0729 19:04:51.613196       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 19:04:51.613292       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0729 19:04:51.620660       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 19:04:51.620801       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0729 19:04:51.641392       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 19:04:51.642477       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0729 19:04:53.227679       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
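The burst of "forbidden" list errors at 19:04:50-51 is the scheduler starting before its RBAC bindings were being served; the closing "Caches are synced" line shows it recovered on its own. If a scheduler stayed in that state, the permission can be checked directly with impersonation (a sketch, assuming the same context):

    kubectl --context no-preload-524369 auth can-i list pods --as=system:kube-scheduler

which should print "yes" once the system:kube-scheduler ClusterRoleBinding is in place.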
	
	
	==> kubelet <==
	Jul 29 19:17:53 no-preload-524369 kubelet[3259]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 19:17:53 no-preload-524369 kubelet[3259]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 19:17:53 no-preload-524369 kubelet[3259]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 19:17:53 no-preload-524369 kubelet[3259]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 19:17:59 no-preload-524369 kubelet[3259]: E0729 19:17:59.235238    3259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-l6hjr" podUID="285c1ec8-47a6-4fcd-bcd5-6a5d7d53a06b"
	Jul 29 19:18:10 no-preload-524369 kubelet[3259]: E0729 19:18:10.235393    3259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-l6hjr" podUID="285c1ec8-47a6-4fcd-bcd5-6a5d7d53a06b"
	Jul 29 19:18:23 no-preload-524369 kubelet[3259]: E0729 19:18:23.237606    3259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-l6hjr" podUID="285c1ec8-47a6-4fcd-bcd5-6a5d7d53a06b"
	Jul 29 19:18:37 no-preload-524369 kubelet[3259]: E0729 19:18:37.234976    3259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-l6hjr" podUID="285c1ec8-47a6-4fcd-bcd5-6a5d7d53a06b"
	Jul 29 19:18:50 no-preload-524369 kubelet[3259]: E0729 19:18:50.235017    3259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-l6hjr" podUID="285c1ec8-47a6-4fcd-bcd5-6a5d7d53a06b"
	Jul 29 19:18:53 no-preload-524369 kubelet[3259]: E0729 19:18:53.300842    3259 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 19:18:53 no-preload-524369 kubelet[3259]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 19:18:53 no-preload-524369 kubelet[3259]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 19:18:53 no-preload-524369 kubelet[3259]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 19:18:53 no-preload-524369 kubelet[3259]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 19:19:03 no-preload-524369 kubelet[3259]: E0729 19:19:03.235800    3259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-l6hjr" podUID="285c1ec8-47a6-4fcd-bcd5-6a5d7d53a06b"
	Jul 29 19:19:17 no-preload-524369 kubelet[3259]: E0729 19:19:17.235192    3259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-l6hjr" podUID="285c1ec8-47a6-4fcd-bcd5-6a5d7d53a06b"
	Jul 29 19:19:31 no-preload-524369 kubelet[3259]: E0729 19:19:31.234954    3259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-l6hjr" podUID="285c1ec8-47a6-4fcd-bcd5-6a5d7d53a06b"
	Jul 29 19:19:44 no-preload-524369 kubelet[3259]: E0729 19:19:44.235223    3259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-l6hjr" podUID="285c1ec8-47a6-4fcd-bcd5-6a5d7d53a06b"
	Jul 29 19:19:53 no-preload-524369 kubelet[3259]: E0729 19:19:53.299704    3259 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 19:19:53 no-preload-524369 kubelet[3259]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 19:19:53 no-preload-524369 kubelet[3259]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 19:19:53 no-preload-524369 kubelet[3259]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 19:19:53 no-preload-524369 kubelet[3259]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 19:19:56 no-preload-524369 kubelet[3259]: E0729 19:19:56.234719    3259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-l6hjr" podUID="285c1ec8-47a6-4fcd-bcd5-6a5d7d53a06b"
	Jul 29 19:20:11 no-preload-524369 kubelet[3259]: E0729 19:20:11.235140    3259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-l6hjr" podUID="285c1ec8-47a6-4fcd-bcd5-6a5d7d53a06b"
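Every kubelet error in this window is the same ImagePullBackOff: metrics-server's image points at fake.domain/registry.k8s.io/echoserver:1.4, an unreachable registry, so the pull can never succeed, the pod never runs, and the 503s above follow. The override is visible on the Deployment (a sketch, assuming the addon's default deployment name metrics-server):

    kubectl --context no-preload-524369 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'

If that prints the fake.domain image, the back-off is a property of this test setup rather than a registry outage.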
	
	
	==> storage-provisioner [39ca77d323edde3a471cc09829107e4b60723540d97038b6563f3515f5632480] <==
	I0729 19:05:00.401576       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 19:05:00.445056       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 19:05:00.447069       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 19:05:00.464249       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 19:05:00.475333       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-524369_5e220e07-4372-4972-98c3-b5615c647e2d!
	I0729 19:05:00.473814       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"368411b3-882c-4f89-a9d3-ebf34908c271", APIVersion:"v1", ResourceVersion:"443", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-524369_5e220e07-4372-4972-98c3-b5615c647e2d became leader
	I0729 19:05:00.576177       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-524369_5e220e07-4372-4972-98c3-b5615c647e2d!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-524369 -n no-preload-524369
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-524369 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-78fcd8795b-l6hjr
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-524369 describe pod metrics-server-78fcd8795b-l6hjr
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-524369 describe pod metrics-server-78fcd8795b-l6hjr: exit status 1 (61.831131ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-78fcd8795b-l6hjr" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-524369 describe pod metrics-server-78fcd8795b-l6hjr: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (371.07s)
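The post-mortem describe fails with NotFound because the pod name captured in the earlier listing (metrics-server-78fcd8795b-l6hjr) no longer existed by the time describe ran; the report does not show why it disappeared. Describing by label instead of by the recorded name avoids the stale reference (a sketch, assuming the addon's usual k8s-app=metrics-server label):

    kubectl --context no-preload-524369 -n kube-system describe pod -l k8s-app=metrics-server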

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (151.32s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
(the WARNING above was emitted 55 more times, verbatim, while the test kept polling 192.168.61.89:8443 and the connection kept being refused)
E0729 19:17:40.400954   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/custom-flannel-085245/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
(the WARNING above was emitted 5 more times, verbatim)
E0729 19:17:46.273527   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/enable-default-cni-085245/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
(the WARNING above was emitted 31 more times, verbatim)
E0729 19:18:18.903816   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/functional-810151/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
E0729 19:18:38.589684   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/flannel-085245/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.89:8443: connect: connection refused
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-834964 -n old-k8s-version-834964
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-834964 -n old-k8s-version-834964: exit status 2 (222.238534ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-834964" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-834964 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-834964 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.807µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-834964 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
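Note: the repeated "pod list ... connection refused" warnings above come from polling the apiserver for dashboard pods by label selector while the control plane is down. A minimal client-go sketch of that kind of poll is shown below, assuming a generic kubeconfig path; the file name, path, and retry interval are illustrative assumptions and this is not the actual helpers_test.go implementation.

// poll_dashboard.go - illustrative sketch only, not part of the minikube test suite.
// Lists pods matching k8s-app=kubernetes-dashboard, roughly the check the
// warning lines above keep retrying while the apiserver refuses connections.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is an assumption for illustration.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(
			context.TODO(), metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err != nil {
			// Corresponds to the "connection refused" warnings while the apiserver is unreachable.
			fmt.Println("WARNING: pod list returned:", err)
			time.Sleep(10 * time.Second)
			continue
		}
		fmt.Printf("found %d dashboard pods\n", len(pods.Items))
		return
	}
}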
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-834964 -n old-k8s-version-834964
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-834964 -n old-k8s-version-834964: exit status 2 (216.265065ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-834964 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p old-k8s-version-834964        | old-k8s-version-834964       | jenkins | v1.33.1 | 29 Jul 24 18:53 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-524369                  | no-preload-524369            | jenkins | v1.33.1 | 29 Jul 24 18:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-524369 --memory=2200                     | no-preload-524369            | jenkins | v1.33.1 | 29 Jul 24 18:54 UTC | 29 Jul 24 19:05 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-612270       | default-k8s-diff-port-612270 | jenkins | v1.33.1 | 29 Jul 24 18:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-612270 | jenkins | v1.33.1 | 29 Jul 24 18:55 UTC | 29 Jul 24 19:04 UTC |
	|         | default-k8s-diff-port-612270                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-834964                              | old-k8s-version-834964       | jenkins | v1.33.1 | 29 Jul 24 18:55 UTC | 29 Jul 24 18:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-834964             | old-k8s-version-834964       | jenkins | v1.33.1 | 29 Jul 24 18:55 UTC | 29 Jul 24 18:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-834964                              | old-k8s-version-834964       | jenkins | v1.33.1 | 29 Jul 24 18:55 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-974855                              | cert-expiration-974855       | jenkins | v1.33.1 | 29 Jul 24 19:01 UTC | 29 Jul 24 19:01 UTC |
	| start   | -p newest-cni-453780 --memory=2200 --alsologtostderr   | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:01 UTC | 29 Jul 24 19:02 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-453780             | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-453780                                   | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-453780                  | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-453780 --memory=2200 --alsologtostderr   | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| image   | newest-cni-453780 image list                           | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-453780                                   | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-453780                                   | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-453780                                   | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	| delete  | -p newest-cni-453780                                   | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	| delete  | -p                                                     | disable-driver-mounts-148539 | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | disable-driver-mounts-148539                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-368536                                  | embed-certs-368536           | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:04 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-368536            | embed-certs-368536           | jenkins | v1.33.1 | 29 Jul 24 19:04 UTC | 29 Jul 24 19:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-368536                                  | embed-certs-368536           | jenkins | v1.33.1 | 29 Jul 24 19:04 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-368536                 | embed-certs-368536           | jenkins | v1.33.1 | 29 Jul 24 19:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-368536                                  | embed-certs-368536           | jenkins | v1.33.1 | 29 Jul 24 19:07 UTC | 29 Jul 24 19:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 19:07:08
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 19:07:08.883432  156414 out.go:291] Setting OutFile to fd 1 ...
	I0729 19:07:08.883769  156414 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:07:08.883781  156414 out.go:304] Setting ErrFile to fd 2...
	I0729 19:07:08.883788  156414 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:07:08.883976  156414 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19339-88081/.minikube/bin
	I0729 19:07:08.884534  156414 out.go:298] Setting JSON to false
	I0729 19:07:08.885578  156414 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":13749,"bootTime":1722266280,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 19:07:08.885642  156414 start.go:139] virtualization: kvm guest
	I0729 19:07:08.888023  156414 out.go:177] * [embed-certs-368536] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 19:07:08.889601  156414 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 19:07:08.889601  156414 notify.go:220] Checking for updates...
	I0729 19:07:08.892436  156414 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 19:07:08.893966  156414 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19339-88081/kubeconfig
	I0729 19:07:08.895257  156414 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19339-88081/.minikube
	I0729 19:07:08.896516  156414 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 19:07:08.897746  156414 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 19:07:08.899225  156414 config.go:182] Loaded profile config "embed-certs-368536": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:07:08.899588  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:07:08.899642  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:07:08.914943  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40387
	I0729 19:07:08.915307  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:07:08.915885  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:07:08.915905  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:07:08.916305  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:07:08.916486  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:07:08.916703  156414 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 19:07:08.917034  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:07:08.917074  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:07:08.931497  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36817
	I0729 19:07:08.931857  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:07:08.932280  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:07:08.932300  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:07:08.932640  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:07:08.932819  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:07:08.968386  156414 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 19:07:08.969538  156414 start.go:297] selected driver: kvm2
	I0729 19:07:08.969556  156414 start.go:901] validating driver "kvm2" against &{Name:embed-certs-368536 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:embed-certs-368536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.95 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:262
80h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:07:08.969681  156414 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 19:07:08.970358  156414 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 19:07:08.970428  156414 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19339-88081/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 19:07:08.986808  156414 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 19:07:08.987271  156414 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 19:07:08.987351  156414 cni.go:84] Creating CNI manager for ""
	I0729 19:07:08.987370  156414 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:07:08.987443  156414 start.go:340] cluster config:
	{Name:embed-certs-368536 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-368536 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.95 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p
2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:07:08.987605  156414 iso.go:125] acquiring lock: {Name:mkff602c753bfa3e0d79d57ee8dc490f2e8d0298 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 19:07:08.989202  156414 out.go:177] * Starting "embed-certs-368536" primary control-plane node in "embed-certs-368536" cluster
	I0729 19:07:08.990452  156414 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 19:07:08.990496  156414 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 19:07:08.990510  156414 cache.go:56] Caching tarball of preloaded images
	I0729 19:07:08.990604  156414 preload.go:172] Found /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 19:07:08.990618  156414 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 19:07:08.990746  156414 profile.go:143] Saving config to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/config.json ...
	I0729 19:07:08.990949  156414 start.go:360] acquireMachinesLock for embed-certs-368536: {Name:mkb4fc91615189a18c2505c715f6575ee0da2912 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 19:07:08.990994  156414 start.go:364] duration metric: took 26.792µs to acquireMachinesLock for "embed-certs-368536"
	I0729 19:07:08.991013  156414 start.go:96] Skipping create...Using existing machine configuration
	I0729 19:07:08.991023  156414 fix.go:54] fixHost starting: 
	I0729 19:07:08.991314  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:07:08.991356  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:07:09.006100  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45265
	I0729 19:07:09.006507  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:07:09.007034  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:07:09.007060  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:07:09.007401  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:07:09.007594  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:07:09.007758  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetState
	I0729 19:07:09.009424  156414 fix.go:112] recreateIfNeeded on embed-certs-368536: state=Running err=<nil>
	W0729 19:07:09.009448  156414 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 19:07:09.011240  156414 out.go:177] * Updating the running kvm2 "embed-certs-368536" VM ...
	I0729 19:07:09.012305  156414 machine.go:94] provisionDockerMachine start ...
	I0729 19:07:09.012324  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:07:09.012506  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:07:09.014924  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:07:09.015334  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:07:09.015362  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:07:09.015507  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:07:09.015664  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:07:09.015796  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:07:09.015962  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:07:09.016106  156414 main.go:141] libmachine: Using SSH client type: native
	I0729 19:07:09.016288  156414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.95 22 <nil> <nil>}
	I0729 19:07:09.016300  156414 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 19:07:11.913157  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:14.985074  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:21.065092  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:24.137179  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:30.217186  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:33.289154  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:41.417004  152077 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:07:41.417303  152077 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:07:41.417326  152077 kubeadm.go:310] 
	I0729 19:07:41.417370  152077 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 19:07:41.417434  152077 kubeadm.go:310] 		timed out waiting for the condition
	I0729 19:07:41.417456  152077 kubeadm.go:310] 
	I0729 19:07:41.417514  152077 kubeadm.go:310] 	This error is likely caused by:
	I0729 19:07:41.417586  152077 kubeadm.go:310] 		- The kubelet is not running
	I0729 19:07:41.417720  152077 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 19:07:41.417732  152077 kubeadm.go:310] 
	I0729 19:07:41.417870  152077 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 19:07:41.417917  152077 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 19:07:41.417972  152077 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 19:07:41.417982  152077 kubeadm.go:310] 
	I0729 19:07:41.418104  152077 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 19:07:41.418232  152077 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 19:07:41.418247  152077 kubeadm.go:310] 
	I0729 19:07:41.418370  152077 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 19:07:41.418477  152077 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 19:07:41.418596  152077 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 19:07:41.418696  152077 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 19:07:41.418706  152077 kubeadm.go:310] 
	I0729 19:07:41.419442  152077 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 19:07:41.419562  152077 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 19:07:41.419660  152077 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 19:07:41.419767  152077 kubeadm.go:394] duration metric: took 8m3.273724985s to StartCluster
	I0729 19:07:41.419853  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:07:41.419923  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:07:41.466895  152077 cri.go:89] found id: ""
	I0729 19:07:41.466922  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.466933  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:07:41.466941  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:07:41.467013  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:07:41.500836  152077 cri.go:89] found id: ""
	I0729 19:07:41.500876  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.500888  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:07:41.500896  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:07:41.500949  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:07:41.534917  152077 cri.go:89] found id: ""
	I0729 19:07:41.534946  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.534958  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:07:41.534968  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:07:41.535038  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:07:41.570516  152077 cri.go:89] found id: ""
	I0729 19:07:41.570545  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.570556  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:07:41.570565  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:07:41.570640  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:07:41.608833  152077 cri.go:89] found id: ""
	I0729 19:07:41.608881  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.608894  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:07:41.608902  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:07:41.608969  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:07:41.644079  152077 cri.go:89] found id: ""
	I0729 19:07:41.644114  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.644127  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:07:41.644136  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:07:41.644198  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:07:41.678991  152077 cri.go:89] found id: ""
	I0729 19:07:41.679019  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.679028  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:07:41.679035  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:07:41.679088  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:07:41.722775  152077 cri.go:89] found id: ""
	I0729 19:07:41.722803  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.722815  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:07:41.722829  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:07:41.722857  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:07:41.841614  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:07:41.841641  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:07:41.841658  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:07:41.945679  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:07:41.945716  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:07:41.984505  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:07:41.984536  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:07:42.037376  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:07:42.037418  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
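	The log-gathering phase above can be replayed by hand on the affected node (for example over minikube ssh for the failing profile) to look at the same state minikube collected. A minimal sketch, assuming shell access to the node and reusing the exact commands shown in the log:

	    # kubelet and CRI-O service logs (same journalctl calls minikube ran)
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400
	    # all containers known to the runtime, including exited ones
	    sudo crictl ps -a
	    # recent kernel warnings and errors
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400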
	W0729 19:07:42.051259  152077 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0729 19:07:42.051310  152077 out.go:239] * 
	W0729 19:07:42.051369  152077 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 19:07:42.051390  152077 out.go:239] * 
	W0729 19:07:42.052280  152077 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 19:07:42.055255  152077 out.go:177] 
	W0729 19:07:42.056299  152077 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 19:07:42.056362  152077 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0729 19:07:42.056391  152077 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0729 19:07:42.057745  152077 out.go:177] 
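	The suggestion above points at a kubelet cgroup-driver mismatch with the container runtime. A sketch of how one might check and retry, assuming shell access to the node, with <profile> standing in for the affected profile name (not shown in this excerpt) and assuming the crio config subcommand is available in the guest:

	    # compare the kubelet and CRI-O cgroup drivers
	    sudo grep cgroupDriver /var/lib/kubelet/config.yaml
	    sudo crio config 2>/dev/null | grep cgroup_manager
	    # retry the start with the driver forced to systemd, as the log suggests
	    minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd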
	I0729 19:07:42.413268  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:45.481071  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:51.561079  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:54.633091  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:00.713100  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:03.785182  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:09.865116  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:12.937102  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:19.017094  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:22.093191  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:28.169116  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:31.241130  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:37.321134  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:40.393092  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:46.473118  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:49.545166  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:55.625086  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:58.697184  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:04.777113  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:07.849165  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:13.933065  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:17.001100  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:23.081133  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:26.153086  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:32.233178  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:35.305183  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:41.385184  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:44.461106  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:50.537120  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:53.609150  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:59.689091  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:02.761193  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:08.845074  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:11.917067  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:17.993090  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:21.065137  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:27.145098  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:30.217175  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:36.301060  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:39.369118  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:45.449082  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:48.521097  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:54.601120  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:57.673200  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:03.753116  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:06.825136  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:12.905195  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:15.977118  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:22.057076  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:25.129144  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:31.209150  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:34.281164  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:40.365067  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:43.437113  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:46.437452  156414 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 19:11:46.437532  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetMachineName
	I0729 19:11:46.437865  156414 buildroot.go:166] provisioning hostname "embed-certs-368536"
	I0729 19:11:46.437902  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetMachineName
	I0729 19:11:46.438117  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:11:46.439690  156414 machine.go:97] duration metric: took 4m37.427371174s to provisionDockerMachine
	I0729 19:11:46.439733  156414 fix.go:56] duration metric: took 4m37.448711854s for fixHost
	I0729 19:11:46.439746  156414 start.go:83] releasing machines lock for "embed-certs-368536", held for 4m37.448735558s
	W0729 19:11:46.439780  156414 start.go:714] error starting host: provision: host is not running
	W0729 19:11:46.439926  156414 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0729 19:11:46.439941  156414 start.go:729] Will try again in 5 seconds ...
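	The long run of "no route to host" dial errors above means the embed-certs-368536 VM never became reachable on 22/tcp, which matches the "host is not running" provision failure. With the kvm2 driver this can be cross-checked through libvirt while the outage is ongoing; a sketch, assuming virsh is available on the Jenkins host:

	    # is the domain actually running?
	    virsh list --all | grep embed-certs-368536
	    # which address, if any, it holds on the mk-embed-certs-368536 network
	    virsh domifaddr embed-certs-368536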
	I0729 19:11:51.440155  156414 start.go:360] acquireMachinesLock for embed-certs-368536: {Name:mkb4fc91615189a18c2505c715f6575ee0da2912 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 19:11:51.440266  156414 start.go:364] duration metric: took 69.381µs to acquireMachinesLock for "embed-certs-368536"
	I0729 19:11:51.440311  156414 start.go:96] Skipping create...Using existing machine configuration
	I0729 19:11:51.440322  156414 fix.go:54] fixHost starting: 
	I0729 19:11:51.440735  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:11:51.440764  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:11:51.455959  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35793
	I0729 19:11:51.456443  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:11:51.457053  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:11:51.457078  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:11:51.457475  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:11:51.457721  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:11:51.457885  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetState
	I0729 19:11:51.459531  156414 fix.go:112] recreateIfNeeded on embed-certs-368536: state=Stopped err=<nil>
	I0729 19:11:51.459558  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	W0729 19:11:51.459736  156414 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 19:11:51.461603  156414 out.go:177] * Restarting existing kvm2 VM for "embed-certs-368536" ...
	I0729 19:11:51.462768  156414 main.go:141] libmachine: (embed-certs-368536) Calling .Start
	I0729 19:11:51.462973  156414 main.go:141] libmachine: (embed-certs-368536) Ensuring networks are active...
	I0729 19:11:51.463679  156414 main.go:141] libmachine: (embed-certs-368536) Ensuring network default is active
	I0729 19:11:51.464155  156414 main.go:141] libmachine: (embed-certs-368536) Ensuring network mk-embed-certs-368536 is active
	I0729 19:11:51.464571  156414 main.go:141] libmachine: (embed-certs-368536) Getting domain xml...
	I0729 19:11:51.465359  156414 main.go:141] libmachine: (embed-certs-368536) Creating domain...
	I0729 19:11:51.794918  156414 main.go:141] libmachine: (embed-certs-368536) Waiting to get IP...
	I0729 19:11:51.795679  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:51.796159  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:51.796221  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:51.796150  157493 retry.go:31] will retry after 235.729444ms: waiting for machine to come up
	I0729 19:11:52.033554  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:52.034180  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:52.034207  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:52.034103  157493 retry.go:31] will retry after 323.595446ms: waiting for machine to come up
	I0729 19:11:52.359640  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:52.360204  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:52.360233  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:52.360143  157493 retry.go:31] will retry after 341.954873ms: waiting for machine to come up
	I0729 19:11:52.703779  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:52.704350  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:52.704373  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:52.704298  157493 retry.go:31] will retry after 385.738264ms: waiting for machine to come up
	I0729 19:11:53.091976  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:53.092451  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:53.092484  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:53.092397  157493 retry.go:31] will retry after 759.811264ms: waiting for machine to come up
	I0729 19:11:53.853241  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:53.853799  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:53.853826  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:53.853755  157493 retry.go:31] will retry after 761.294244ms: waiting for machine to come up
	I0729 19:11:54.616298  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:54.617009  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:54.617039  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:54.616952  157493 retry.go:31] will retry after 1.056936741s: waiting for machine to come up
	I0729 19:11:55.675047  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:55.675491  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:55.675514  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:55.675442  157493 retry.go:31] will retry after 1.232760679s: waiting for machine to come up
	I0729 19:11:56.909745  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:56.910283  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:56.910309  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:56.910228  157493 retry.go:31] will retry after 1.432617964s: waiting for machine to come up
	I0729 19:11:58.344399  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:58.345006  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:58.345033  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:58.344957  157493 retry.go:31] will retry after 1.914060621s: waiting for machine to come up
	I0729 19:12:00.262146  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:00.262707  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:12:00.262739  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:12:00.262638  157493 retry.go:31] will retry after 2.77447957s: waiting for machine to come up
	I0729 19:12:03.039059  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:03.039693  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:12:03.039718  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:12:03.039650  157493 retry.go:31] will retry after 2.755354142s: waiting for machine to come up
	I0729 19:12:05.797890  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:05.798279  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:12:05.798306  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:12:05.798234  157493 retry.go:31] will retry after 4.501451096s: waiting for machine to come up
	I0729 19:12:10.304116  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.304586  156414 main.go:141] libmachine: (embed-certs-368536) Found IP for machine: 192.168.50.95
	I0729 19:12:10.304616  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has current primary IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.304626  156414 main.go:141] libmachine: (embed-certs-368536) Reserving static IP address...
	I0729 19:12:10.305096  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "embed-certs-368536", mac: "52:54:00:86:e7:e8", ip: "192.168.50.95"} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.305134  156414 main.go:141] libmachine: (embed-certs-368536) DBG | skip adding static IP to network mk-embed-certs-368536 - found existing host DHCP lease matching {name: "embed-certs-368536", mac: "52:54:00:86:e7:e8", ip: "192.168.50.95"}
	I0729 19:12:10.305151  156414 main.go:141] libmachine: (embed-certs-368536) Reserved static IP address: 192.168.50.95
	I0729 19:12:10.305166  156414 main.go:141] libmachine: (embed-certs-368536) Waiting for SSH to be available...
	I0729 19:12:10.305184  156414 main.go:141] libmachine: (embed-certs-368536) DBG | Getting to WaitForSSH function...
	I0729 19:12:10.307568  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.307936  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.307972  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.308079  156414 main.go:141] libmachine: (embed-certs-368536) DBG | Using SSH client type: external
	I0729 19:12:10.308110  156414 main.go:141] libmachine: (embed-certs-368536) DBG | Using SSH private key: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/embed-certs-368536/id_rsa (-rw-------)
	I0729 19:12:10.308140  156414 main.go:141] libmachine: (embed-certs-368536) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.95 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19339-88081/.minikube/machines/embed-certs-368536/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 19:12:10.308157  156414 main.go:141] libmachine: (embed-certs-368536) DBG | About to run SSH command:
	I0729 19:12:10.308170  156414 main.go:141] libmachine: (embed-certs-368536) DBG | exit 0
	I0729 19:12:10.436958  156414 main.go:141] libmachine: (embed-certs-368536) DBG | SSH cmd err, output: <nil>: 
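	The external SSH probe above (the "exit 0" command) can be reproduced by hand, which is useful when WaitForSSH keeps retrying. A sketch using a subset of the arguments libmachine logged above:

	    ssh -F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 \
	        -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	        -o IdentitiesOnly=yes \
	        -i /home/jenkins/minikube-integration/19339-88081/.minikube/machines/embed-certs-368536/id_rsa \
	        docker@192.168.50.95 'exit 0' && echo ssh-ok   # prints ssh-ok when the guest accepts the key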
	I0729 19:12:10.437313  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetConfigRaw
	I0729 19:12:10.437962  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetIP
	I0729 19:12:10.440164  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.440520  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.440545  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.440930  156414 profile.go:143] Saving config to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/config.json ...
	I0729 19:12:10.441184  156414 machine.go:94] provisionDockerMachine start ...
	I0729 19:12:10.441207  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:12:10.441430  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:10.443802  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.444155  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.444187  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.444375  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:10.444541  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:10.444719  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:10.444897  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:10.445064  156414 main.go:141] libmachine: Using SSH client type: native
	I0729 19:12:10.445289  156414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.95 22 <nil> <nil>}
	I0729 19:12:10.445300  156414 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 19:12:10.557371  156414 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 19:12:10.557405  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetMachineName
	I0729 19:12:10.557675  156414 buildroot.go:166] provisioning hostname "embed-certs-368536"
	I0729 19:12:10.557706  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetMachineName
	I0729 19:12:10.557905  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:10.560444  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.560793  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.560819  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.561027  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:10.561249  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:10.561442  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:10.561594  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:10.561783  156414 main.go:141] libmachine: Using SSH client type: native
	I0729 19:12:10.561990  156414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.95 22 <nil> <nil>}
	I0729 19:12:10.562006  156414 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-368536 && echo "embed-certs-368536" | sudo tee /etc/hostname
	I0729 19:12:10.688844  156414 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-368536
	
	I0729 19:12:10.688891  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:10.691701  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.692075  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.692102  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.692293  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:10.692468  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:10.692660  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:10.692821  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:10.693024  156414 main.go:141] libmachine: Using SSH client type: native
	I0729 19:12:10.693222  156414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.95 22 <nil> <nil>}
	I0729 19:12:10.693245  156414 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-368536' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-368536/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-368536' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 19:12:10.814346  156414 main.go:141] libmachine: SSH cmd err, output: <nil>: 
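	A quick manual check that the hostname setup above took effect, assuming a shell on the machine:

	    hostname                             # expected: embed-certs-368536
	    grep embed-certs-368536 /etc/hosts   # expected: a 127.0.1.1 entry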
	I0729 19:12:10.814376  156414 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19339-88081/.minikube CaCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19339-88081/.minikube}
	I0729 19:12:10.814400  156414 buildroot.go:174] setting up certificates
	I0729 19:12:10.814409  156414 provision.go:84] configureAuth start
	I0729 19:12:10.814417  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetMachineName
	I0729 19:12:10.814706  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetIP
	I0729 19:12:10.817503  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.817859  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.817886  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.818025  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:10.820077  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.820443  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.820473  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.820617  156414 provision.go:143] copyHostCerts
	I0729 19:12:10.820703  156414 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem, removing ...
	I0729 19:12:10.820713  156414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem
	I0729 19:12:10.820781  156414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem (1078 bytes)
	I0729 19:12:10.820905  156414 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem, removing ...
	I0729 19:12:10.820914  156414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem
	I0729 19:12:10.820943  156414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem (1123 bytes)
	I0729 19:12:10.821005  156414 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem, removing ...
	I0729 19:12:10.821013  156414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem
	I0729 19:12:10.821041  156414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem (1679 bytes)
	I0729 19:12:10.821115  156414 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem org=jenkins.embed-certs-368536 san=[127.0.0.1 192.168.50.95 embed-certs-368536 localhost minikube]
	I0729 19:12:10.867595  156414 provision.go:177] copyRemoteCerts
	I0729 19:12:10.867661  156414 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 19:12:10.867685  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:10.870501  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.870857  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.870881  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.871063  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:10.871267  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:10.871428  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:10.871595  156414 sshutil.go:53] new ssh client: &{IP:192.168.50.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/embed-certs-368536/id_rsa Username:docker}
	I0729 19:12:10.956269  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 19:12:10.981554  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 19:12:11.006055  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
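	The server certificate copied above was generated with the SANs listed earlier (127.0.0.1, 192.168.50.95, embed-certs-368536, localhost, minikube). If TLS trouble shows up later, it can be inspected in place; a sketch using openssl, assuming openssl is present in the guest:

	    sudo openssl x509 -in /etc/docker/server.pem -noout -text \
	        | grep -A1 'Subject Alternative Name'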
	I0729 19:12:11.031709  156414 provision.go:87] duration metric: took 217.286178ms to configureAuth
	I0729 19:12:11.031746  156414 buildroot.go:189] setting minikube options for container-runtime
	I0729 19:12:11.031924  156414 config.go:182] Loaded profile config "embed-certs-368536": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:12:11.032088  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:11.034772  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.035129  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:11.035159  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.035405  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:11.035583  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:11.035737  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:11.035863  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:11.036016  156414 main.go:141] libmachine: Using SSH client type: native
	I0729 19:12:11.036203  156414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.95 22 <nil> <nil>}
	I0729 19:12:11.036218  156414 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 19:12:11.321782  156414 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 19:12:11.321810  156414 machine.go:97] duration metric: took 880.608535ms to provisionDockerMachine
	I0729 19:12:11.321823  156414 start.go:293] postStartSetup for "embed-certs-368536" (driver="kvm2")
	I0729 19:12:11.321834  156414 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 19:12:11.321850  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:12:11.322243  156414 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 19:12:11.322281  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:11.325081  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.325437  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:11.325459  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.325612  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:11.325841  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:11.326025  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:11.326177  156414 sshutil.go:53] new ssh client: &{IP:192.168.50.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/embed-certs-368536/id_rsa Username:docker}
	I0729 19:12:11.412548  156414 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 19:12:11.417360  156414 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 19:12:11.417386  156414 filesync.go:126] Scanning /home/jenkins/minikube-integration/19339-88081/.minikube/addons for local assets ...
	I0729 19:12:11.417466  156414 filesync.go:126] Scanning /home/jenkins/minikube-integration/19339-88081/.minikube/files for local assets ...
	I0729 19:12:11.417538  156414 filesync.go:149] local asset: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem -> 952822.pem in /etc/ssl/certs
	I0729 19:12:11.417632  156414 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 19:12:11.429176  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem --> /etc/ssl/certs/952822.pem (1708 bytes)
	I0729 19:12:11.457333  156414 start.go:296] duration metric: took 135.490005ms for postStartSetup
	I0729 19:12:11.457401  156414 fix.go:56] duration metric: took 20.017075153s for fixHost
	I0729 19:12:11.457428  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:11.460243  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.460653  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:11.460702  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.460873  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:11.461060  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:11.461233  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:11.461408  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:11.461586  156414 main.go:141] libmachine: Using SSH client type: native
	I0729 19:12:11.461763  156414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.95 22 <nil> <nil>}
	I0729 19:12:11.461779  156414 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 19:12:11.573697  156414 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722280331.531987438
	
	I0729 19:12:11.573724  156414 fix.go:216] guest clock: 1722280331.531987438
	I0729 19:12:11.573735  156414 fix.go:229] Guest: 2024-07-29 19:12:11.531987438 +0000 UTC Remote: 2024-07-29 19:12:11.457406225 +0000 UTC m=+302.608153452 (delta=74.581213ms)
	I0729 19:12:11.573758  156414 fix.go:200] guest clock delta is within tolerance: 74.581213ms
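The clock check above reads the guest's date +%s.%N output, compares it with the host-side timestamp, and accepts the skew when it falls inside a fixed tolerance. A minimal Go sketch of that comparison using the values from the log; withinTolerance and the one-second tolerance are illustrative assumptions, not minikube's actual fix.go:

    package main

    import (
        "fmt"
        "math"
        "time"
    )

    // withinTolerance reports whether the skew between the guest clock and the
    // local clock is small enough to skip resetting the guest time.
    func withinTolerance(guest, local time.Time, tolerance time.Duration) (time.Duration, bool) {
        delta := guest.Sub(local)
        return delta, math.Abs(float64(delta)) <= float64(tolerance)
    }

    func main() {
        // Timestamps taken from the log lines above.
        guest := time.Date(2024, 7, 29, 19, 12, 11, 531987438, time.UTC)
        local := time.Date(2024, 7, 29, 19, 12, 11, 457406225, time.UTC)

        delta, ok := withinTolerance(guest, local, time.Second)
        fmt.Printf("delta=%v within tolerance=%v\n", delta, ok) // delta=74.581213ms within tolerance=true
    }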
	I0729 19:12:11.573763  156414 start.go:83] releasing machines lock for "embed-certs-368536", held for 20.133485011s
	I0729 19:12:11.573782  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:12:11.574056  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetIP
	I0729 19:12:11.576405  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.576760  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:11.576798  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.576988  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:12:11.577479  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:12:11.577644  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:12:11.577737  156414 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 19:12:11.577799  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:11.577840  156414 ssh_runner.go:195] Run: cat /version.json
	I0729 19:12:11.577869  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:11.580473  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.580564  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.580840  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:11.580890  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.580921  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:11.580936  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.581056  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:11.581203  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:11.581358  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:11.581369  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:11.581554  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:11.581564  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:11.581669  156414 sshutil.go:53] new ssh client: &{IP:192.168.50.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/embed-certs-368536/id_rsa Username:docker}
	I0729 19:12:11.581732  156414 sshutil.go:53] new ssh client: &{IP:192.168.50.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/embed-certs-368536/id_rsa Username:docker}
	I0729 19:12:11.662254  156414 ssh_runner.go:195] Run: systemctl --version
	I0729 19:12:11.688214  156414 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 19:12:11.838322  156414 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 19:12:11.844435  156414 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 19:12:11.844521  156414 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 19:12:11.859899  156414 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 19:12:11.859923  156414 start.go:495] detecting cgroup driver to use...
	I0729 19:12:11.859990  156414 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 19:12:11.876768  156414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 19:12:11.890508  156414 docker.go:217] disabling cri-docker service (if available) ...
	I0729 19:12:11.890584  156414 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 19:12:11.904817  156414 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 19:12:11.919251  156414 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 19:12:12.053205  156414 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 19:12:12.205975  156414 docker.go:233] disabling docker service ...
	I0729 19:12:12.206041  156414 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 19:12:12.222129  156414 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 19:12:12.235288  156414 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 19:12:12.386940  156414 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 19:12:12.503688  156414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 19:12:12.518539  156414 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 19:12:12.538744  156414 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 19:12:12.538805  156414 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:12:12.549615  156414 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 19:12:12.549683  156414 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:12:12.560565  156414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:12:12.571814  156414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:12:12.582852  156414 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 19:12:12.595237  156414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:12:12.607232  156414 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:12:12.626183  156414 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:12:12.637202  156414 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 19:12:12.647141  156414 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 19:12:12.647204  156414 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 19:12:12.661099  156414 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 19:12:12.671936  156414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:12:12.795418  156414 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 19:12:12.934125  156414 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 19:12:12.934220  156414 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 19:12:12.939058  156414 start.go:563] Will wait 60s for crictl version
	I0729 19:12:12.939123  156414 ssh_runner.go:195] Run: which crictl
	I0729 19:12:12.943972  156414 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 19:12:12.990564  156414 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
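Both waits above ("Will wait 60s for socket path", "Will wait 60s for crictl version") are bounded poll-until-success loops around a stat and a crictl call. A hedged Go sketch of such a loop; waitFor is a hypothetical helper, not minikube's code:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitFor polls check until it succeeds or the timeout elapses.
    func waitFor(timeout, interval time.Duration, check func() error) error {
        deadline := time.Now().Add(timeout)
        for {
            err := check()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out after %v: %w", timeout, err)
            }
            time.Sleep(interval)
        }
    }

    func main() {
        err := waitFor(60*time.Second, time.Second, func() error {
            _, statErr := os.Stat("/var/run/crio/crio.sock")
            return statErr
        })
        fmt.Println("crio socket ready:", err == nil)
    }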
	I0729 19:12:12.990672  156414 ssh_runner.go:195] Run: crio --version
	I0729 19:12:13.018852  156414 ssh_runner.go:195] Run: crio --version
	I0729 19:12:13.053593  156414 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 19:12:13.054890  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetIP
	I0729 19:12:13.057601  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:13.057994  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:13.058025  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:13.058229  156414 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0729 19:12:13.062303  156414 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
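The /etc/hosts update above is an idempotent rewrite: grep for the exact entry first, and only when it is missing, drop any stale line for that host name and append the fresh mapping through a temp file. A small Go sketch of the rewrite step; ensureHostsEntry is a hypothetical helper, not minikube's ssh_runner:

    package main

    import (
        "fmt"
        "strings"
    )

    // ensureHostsEntry returns the hosts file content with exactly one
    // "<ip>\t<name>" mapping: stale lines for name are dropped, then the
    // fresh entry is appended.
    func ensureHostsEntry(content, ip, name string) string {
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(content, "\n"), "\n") {
            if line == "" || strings.HasSuffix(line, "\t"+name) {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name)
        return strings.Join(kept, "\n") + "\n"
    }

    func main() {
        hosts := "127.0.0.1\tlocalhost\n"
        fmt.Print(ensureHostsEntry(hosts, "192.168.50.1", "host.minikube.internal"))
    }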
	I0729 19:12:13.075815  156414 kubeadm.go:883] updating cluster {Name:embed-certs-368536 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-368536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.95 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 19:12:13.075989  156414 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 19:12:13.076042  156414 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:12:13.112073  156414 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 19:12:13.112136  156414 ssh_runner.go:195] Run: which lz4
	I0729 19:12:13.116209  156414 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 19:12:13.120509  156414 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 19:12:13.120545  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 19:12:14.521429  156414 crio.go:462] duration metric: took 1.405252765s to copy over tarball
	I0729 19:12:14.521516  156414 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 19:12:16.740086  156414 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.218529852s)
	I0729 19:12:16.740129  156414 crio.go:469] duration metric: took 2.218665992s to extract the tarball
	I0729 19:12:16.740140  156414 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 19:12:16.779302  156414 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:12:16.825787  156414 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 19:12:16.825823  156414 cache_images.go:84] Images are preloaded, skipping loading
	I0729 19:12:16.825832  156414 kubeadm.go:934] updating node { 192.168.50.95 8443 v1.30.3 crio true true} ...
	I0729 19:12:16.825972  156414 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-368536 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.95
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-368536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 19:12:16.826034  156414 ssh_runner.go:195] Run: crio config
	I0729 19:12:16.873701  156414 cni.go:84] Creating CNI manager for ""
	I0729 19:12:16.873734  156414 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:12:16.873752  156414 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 19:12:16.873776  156414 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.95 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-368536 NodeName:embed-certs-368536 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.95"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.95 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 19:12:16.873923  156414 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.95
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-368536"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.95
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.95"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 19:12:16.873987  156414 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 19:12:16.884427  156414 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 19:12:16.884530  156414 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 19:12:16.895097  156414 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0729 19:12:16.914112  156414 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 19:12:16.931797  156414 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0729 19:12:16.949824  156414 ssh_runner.go:195] Run: grep 192.168.50.95	control-plane.minikube.internal$ /etc/hosts
	I0729 19:12:16.953765  156414 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.95	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 19:12:16.967138  156414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:12:17.088789  156414 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 19:12:17.108577  156414 certs.go:68] Setting up /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536 for IP: 192.168.50.95
	I0729 19:12:17.108601  156414 certs.go:194] generating shared ca certs ...
	I0729 19:12:17.108623  156414 certs.go:226] acquiring lock for ca certs: {Name:mk20106d58b3f22ea0fc0f4b499fa3fa572a2690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:12:17.108831  156414 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key
	I0729 19:12:17.109076  156414 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key
	I0729 19:12:17.109118  156414 certs.go:256] generating profile certs ...
	I0729 19:12:17.109296  156414 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/client.key
	I0729 19:12:17.109394  156414 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/apiserver.key.77ca8755
	I0729 19:12:17.109448  156414 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/proxy-client.key
	I0729 19:12:17.109619  156414 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282.pem (1338 bytes)
	W0729 19:12:17.109651  156414 certs.go:480] ignoring /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282_empty.pem, impossibly tiny 0 bytes
	I0729 19:12:17.109661  156414 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 19:12:17.109688  156414 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem (1078 bytes)
	I0729 19:12:17.109722  156414 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem (1123 bytes)
	I0729 19:12:17.109756  156414 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem (1679 bytes)
	I0729 19:12:17.109819  156414 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem (1708 bytes)
	I0729 19:12:17.110926  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 19:12:17.142003  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 19:12:17.176798  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 19:12:17.204272  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 19:12:17.244558  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0729 19:12:17.281935  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 19:12:17.306456  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 19:12:17.332576  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 19:12:17.357372  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282.pem --> /usr/share/ca-certificates/95282.pem (1338 bytes)
	I0729 19:12:17.380363  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem --> /usr/share/ca-certificates/952822.pem (1708 bytes)
	I0729 19:12:17.403737  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 19:12:17.428493  156414 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 19:12:17.445767  156414 ssh_runner.go:195] Run: openssl version
	I0729 19:12:17.451514  156414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/952822.pem && ln -fs /usr/share/ca-certificates/952822.pem /etc/ssl/certs/952822.pem"
	I0729 19:12:17.463663  156414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/952822.pem
	I0729 19:12:17.469467  156414 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:45 /usr/share/ca-certificates/952822.pem
	I0729 19:12:17.469523  156414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/952822.pem
	I0729 19:12:17.475754  156414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/952822.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 19:12:17.487617  156414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 19:12:17.498699  156414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:12:17.503205  156414 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:12:17.503257  156414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:12:17.508880  156414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 19:12:17.519570  156414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/95282.pem && ln -fs /usr/share/ca-certificates/95282.pem /etc/ssl/certs/95282.pem"
	I0729 19:12:17.530630  156414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/95282.pem
	I0729 19:12:17.535004  156414 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:45 /usr/share/ca-certificates/95282.pem
	I0729 19:12:17.535046  156414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/95282.pem
	I0729 19:12:17.540647  156414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/95282.pem /etc/ssl/certs/51391683.0"
	I0729 19:12:17.551719  156414 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 19:12:17.556199  156414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 19:12:17.562469  156414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 19:12:17.568561  156414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 19:12:17.574653  156414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 19:12:17.580458  156414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 19:12:17.586392  156414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
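Each openssl ... -checkend 86400 call above asks one question: will this certificate still be valid 24 hours from now? The same check expressed with Go's crypto/x509; expiresWithin is a sketch, not the code minikube runs:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM-encoded certificate at path will
    // expire inside the given window (the -checkend equivalent).
    func expiresWithin(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Println("expires within 24h:", expiring)
    }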
	I0729 19:12:17.592304  156414 kubeadm.go:392] StartCluster: {Name:embed-certs-368536 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:embed-certs-368536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.95 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:12:17.592422  156414 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 19:12:17.592467  156414 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:12:17.631628  156414 cri.go:89] found id: ""
	I0729 19:12:17.631701  156414 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 19:12:17.642636  156414 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 19:12:17.642665  156414 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 19:12:17.642731  156414 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 19:12:17.652551  156414 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 19:12:17.653603  156414 kubeconfig.go:125] found "embed-certs-368536" server: "https://192.168.50.95:8443"
	I0729 19:12:17.656022  156414 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 19:12:17.666987  156414 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.95
	I0729 19:12:17.667014  156414 kubeadm.go:1160] stopping kube-system containers ...
	I0729 19:12:17.667026  156414 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 19:12:17.667065  156414 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:12:17.709534  156414 cri.go:89] found id: ""
	I0729 19:12:17.709598  156414 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 19:12:17.729709  156414 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:12:17.739968  156414 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:12:17.739990  156414 kubeadm.go:157] found existing configuration files:
	
	I0729 19:12:17.740051  156414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 19:12:17.749727  156414 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:12:17.749794  156414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:12:17.760013  156414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 19:12:17.781070  156414 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:12:17.781135  156414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:12:17.794747  156414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 19:12:17.805862  156414 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:12:17.805938  156414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:12:17.815892  156414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 19:12:17.825005  156414 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:12:17.825072  156414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 19:12:17.834745  156414 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:12:17.844191  156414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:12:17.963586  156414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:12:19.325254  156414 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.361626819s)
	I0729 19:12:19.325289  156414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:12:19.537565  156414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:12:19.611697  156414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:12:19.710291  156414 api_server.go:52] waiting for apiserver process to appear ...
	I0729 19:12:19.710408  156414 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:12:20.210809  156414 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:12:20.711234  156414 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:12:20.727361  156414 api_server.go:72] duration metric: took 1.017067714s to wait for apiserver process to appear ...
	I0729 19:12:20.727396  156414 api_server.go:88] waiting for apiserver healthz status ...
	I0729 19:12:20.727432  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:20.727961  156414 api_server.go:269] stopped: https://192.168.50.95:8443/healthz: Get "https://192.168.50.95:8443/healthz": dial tcp 192.168.50.95:8443: connect: connection refused
	I0729 19:12:21.228408  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:23.622048  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 19:12:23.622093  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 19:12:23.622106  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:23.628174  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 19:12:23.628195  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 19:12:23.728403  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:23.732920  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:12:23.732969  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:12:24.227954  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:24.233109  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:12:24.233143  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:12:24.727641  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:24.735127  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:12:24.735156  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:12:25.227686  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:25.236914  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:12:25.236947  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:12:25.728500  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:25.735276  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:12:25.735307  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:12:26.227824  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:26.232439  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:12:26.232471  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:12:26.728521  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:26.732952  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:12:26.732989  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:12:27.227504  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:27.232166  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 200:
	ok
	I0729 19:12:27.238505  156414 api_server.go:141] control plane version: v1.30.3
	I0729 19:12:27.238531  156414 api_server.go:131] duration metric: took 6.511128129s to wait for apiserver health ...
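The repeated 500s above come from polling the apiserver's /healthz endpoint until it reports healthy. A minimal sketch of that kind of wait loop is below (not minikube's actual api_server.go code): it GETs the endpoint roughly every 500ms until a 200 or a deadline. The URL and timeout are taken from the log for illustration; the real client authenticates with the cluster CA and client certs, which this sketch skips.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// The apiserver certificate is not in the system trust store in this
    		// setup, so the sketch skips verification; minikube instead trusts
    		// the cluster CA.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // healthz returned 200: control plane is reachable
    			}
    		}
    		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
    	}
    	return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.50.95:8443/healthz", 2*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }

Until the rbac/bootstrap-roles and apiservice-discovery-controller post-start hooks report ok, /healthz keeps returning 500, which is exactly the pattern recorded above.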
	I0729 19:12:27.238541  156414 cni.go:84] Creating CNI manager for ""
	I0729 19:12:27.238560  156414 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:12:27.240943  156414 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 19:12:27.242526  156414 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 19:12:27.254578  156414 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
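For orientation, the file copied to /etc/cni/net.d/1-k8s.conflist is a bridge CNI chain. The sketch below writes an illustrative conflist of that shape; it is not a reproduction of the exact 496-byte file minikube generates, and the subnet, bridge name, and version fields are placeholders.

    package main

    import "os"

    // Illustrative bridge + portmap CNI configuration, not minikube's exact file.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `

    func main() {
    	// Writing this path requires root on a real node; the path matches the log above.
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
    		panic(err)
    	}
    }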
	I0729 19:12:27.274700  156414 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 19:12:27.289359  156414 system_pods.go:59] 8 kube-system pods found
	I0729 19:12:27.289412  156414 system_pods.go:61] "coredns-7db6d8ff4d-dww2j" [31418aeb-98be-4b43-a687-5c8f64ec5e9d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:12:27.289424  156414 system_pods.go:61] "etcd-embed-certs-368536" [0a7aac00-9161-473a-87f1-2702211beaac] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 19:12:27.289443  156414 system_pods.go:61] "kube-apiserver-embed-certs-368536" [d6638a87-43df-457e-b335-1ab2aa03f421] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 19:12:27.289469  156414 system_pods.go:61] "kube-controller-manager-embed-certs-368536" [76fefc23-7794-40fc-845a-73d83eb55450] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 19:12:27.289478  156414 system_pods.go:61] "kube-proxy-9xwt2" [f637605b-9bc2-4922-b801-04f681a81e7c] Running
	I0729 19:12:27.289485  156414 system_pods.go:61] "kube-scheduler-embed-certs-368536" [476e1d42-6610-44b5-b77b-b25537f044eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 19:12:27.289495  156414 system_pods.go:61] "metrics-server-569cc877fc-xnkwq" [19328b03-8c7d-499f-9a30-31f023605e49] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:12:27.289500  156414 system_pods.go:61] "storage-provisioner" [b049d065-fbb1-48b6-a377-2938a0519c78] Running
	I0729 19:12:27.289513  156414 system_pods.go:74] duration metric: took 14.789212ms to wait for pod list to return data ...
	I0729 19:12:27.289525  156414 node_conditions.go:102] verifying NodePressure condition ...
	I0729 19:12:27.292692  156414 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 19:12:27.292716  156414 node_conditions.go:123] node cpu capacity is 2
	I0729 19:12:27.292809  156414 node_conditions.go:105] duration metric: took 3.278515ms to run NodePressure ...
	I0729 19:12:27.292825  156414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:12:27.563167  156414 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 19:12:27.567532  156414 kubeadm.go:739] kubelet initialised
	I0729 19:12:27.567552  156414 kubeadm.go:740] duration metric: took 4.363687ms waiting for restarted kubelet to initialise ...
	I0729 19:12:27.567566  156414 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:12:27.573549  156414 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-dww2j" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:29.580511  156414 pod_ready.go:102] pod "coredns-7db6d8ff4d-dww2j" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:32.080352  156414 pod_ready.go:102] pod "coredns-7db6d8ff4d-dww2j" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:34.079542  156414 pod_ready.go:92] pod "coredns-7db6d8ff4d-dww2j" in "kube-system" namespace has status "Ready":"True"
	I0729 19:12:34.079564  156414 pod_ready.go:81] duration metric: took 6.505994438s for pod "coredns-7db6d8ff4d-dww2j" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:34.079574  156414 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:36.086785  156414 pod_ready.go:102] pod "etcd-embed-certs-368536" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:37.089587  156414 pod_ready.go:92] pod "etcd-embed-certs-368536" in "kube-system" namespace has status "Ready":"True"
	I0729 19:12:37.089611  156414 pod_ready.go:81] duration metric: took 3.010030448s for pod "etcd-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.089621  156414 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.093814  156414 pod_ready.go:92] pod "kube-apiserver-embed-certs-368536" in "kube-system" namespace has status "Ready":"True"
	I0729 19:12:37.093833  156414 pod_ready.go:81] duration metric: took 4.206508ms for pod "kube-apiserver-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.093842  156414 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.097603  156414 pod_ready.go:92] pod "kube-controller-manager-embed-certs-368536" in "kube-system" namespace has status "Ready":"True"
	I0729 19:12:37.097621  156414 pod_ready.go:81] duration metric: took 3.772149ms for pod "kube-controller-manager-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.097633  156414 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9xwt2" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.101696  156414 pod_ready.go:92] pod "kube-proxy-9xwt2" in "kube-system" namespace has status "Ready":"True"
	I0729 19:12:37.101720  156414 pod_ready.go:81] duration metric: took 4.078653ms for pod "kube-proxy-9xwt2" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.101732  156414 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.107711  156414 pod_ready.go:92] pod "kube-scheduler-embed-certs-368536" in "kube-system" namespace has status "Ready":"True"
	I0729 19:12:37.107728  156414 pod_ready.go:81] duration metric: took 5.989066ms for pod "kube-scheduler-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.107738  156414 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:39.113796  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:41.115401  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:43.614461  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:45.614900  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:48.113738  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:50.114364  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:52.114790  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:54.613472  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:56.613889  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:59.114206  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:01.614516  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:04.114859  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:06.114993  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:08.615213  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:11.114015  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:13.114342  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:15.614423  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:18.114442  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:20.614465  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:23.117909  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:25.614155  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:28.114746  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:30.613689  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:32.614790  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:35.113716  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:37.116362  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:39.614545  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:42.114414  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:44.114654  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:46.114765  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:48.615523  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:51.114096  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:53.114180  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:55.614278  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:58.114138  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:00.613679  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:02.614878  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:05.117278  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:07.614025  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:09.614681  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:12.114414  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:14.114458  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:16.614364  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:19.114533  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:21.613756  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:24.114325  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:26.614276  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:29.114137  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:31.114274  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:33.115749  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:35.614067  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:37.614374  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:39.615618  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:42.114139  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:44.114503  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:46.114624  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:48.613926  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:50.614527  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:53.115129  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:55.613563  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:57.615164  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:59.616129  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:02.114384  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:04.114621  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:06.114864  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:08.115242  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:10.613949  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:13.115359  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:15.614560  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:17.615109  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:20.114341  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:22.115253  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:24.119792  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:26.614361  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:29.113806  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:31.114150  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:33.614207  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:35.616204  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:38.113264  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:40.615054  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:42.615127  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:45.115119  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:47.613589  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:49.613803  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:51.615235  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:54.113908  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:56.614614  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:59.114193  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:01.614642  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:04.114186  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:06.614156  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:08.614216  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:10.615368  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:13.116263  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:15.613987  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:17.614183  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:19.617124  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:22.114156  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:24.613643  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:26.613720  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:28.616174  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:31.114289  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:33.114818  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:35.614735  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:37.107998  156414 pod_ready.go:81] duration metric: took 4m0.000241864s for pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace to be "Ready" ...
	E0729 19:16:37.108045  156414 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 19:16:37.108068  156414 pod_ready.go:38] duration metric: took 4m9.540493845s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
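The long run of "Ready":"False" lines above is a readiness poll on the metrics-server pod that eventually hits its 4m0s budget. A rough sketch of that kind of check follows (not minikube's pod_ready implementation): it reads the pod's Ready condition via kubectl's jsonpath output until it is "True" or the deadline passes. The pod name, namespace, and budget come from the log; kubectl being on PATH and pointed at this cluster is an assumption.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // podReady reports whether the pod's Ready condition is currently "True".
    func podReady(namespace, name string) (bool, error) {
    	out, err := exec.Command("kubectl", "get", "pod", "-n", namespace, name,
    		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
    	if err != nil {
    		return false, err
    	}
    	return strings.TrimSpace(string(out)) == "True", nil
    }

    func main() {
    	deadline := time.Now().Add(4 * time.Minute) // matches the 4m0s budget in the log
    	for time.Now().Before(deadline) {
    		ready, err := podReady("kube-system", "metrics-server-569cc877fc-xnkwq")
    		if err == nil && ready {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("timed out waiting for pod to be Ready")
    }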
	I0729 19:16:37.108105  156414 kubeadm.go:597] duration metric: took 4m19.465427343s to restartPrimaryControlPlane
	W0729 19:16:37.108167  156414 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 19:16:37.108196  156414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 19:17:08.548650  156414 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.44042578s)
	I0729 19:17:08.548730  156414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 19:17:08.564620  156414 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:17:08.575061  156414 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:17:08.585537  156414 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:17:08.585566  156414 kubeadm.go:157] found existing configuration files:
	
	I0729 19:17:08.585610  156414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 19:17:08.594641  156414 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:17:08.594702  156414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:17:08.604434  156414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 19:17:08.613126  156414 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:17:08.613177  156414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:17:08.622123  156414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 19:17:08.630620  156414 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:17:08.630661  156414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:17:08.640140  156414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 19:17:08.648712  156414 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:17:08.648768  156414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
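The four grep/rm pairs above implement a simple rule: keep a kubeconfig under /etc/kubernetes only if it references the expected control-plane endpoint, otherwise delete it so `kubeadm init` regenerates it. A simplified local re-implementation of that rule is sketched below (the real code runs these commands over SSH via ssh_runner; running it requires root on the node).

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const endpoint = "https://control-plane.minikube.internal:8443"
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			// Missing file or wrong endpoint: treat as stale and remove it,
    			// mirroring the `grep ... || rm -f ...` pattern in the log.
    			os.Remove(f)
    			fmt.Println("removed stale config:", f)
    			continue
    		}
    		fmt.Println("kept:", f)
    	}
    }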
	I0729 19:17:08.658010  156414 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 19:17:08.709849  156414 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 19:17:08.709998  156414 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 19:17:08.850515  156414 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 19:17:08.850632  156414 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 19:17:08.850769  156414 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 19:17:09.057782  156414 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 19:17:09.059421  156414 out.go:204]   - Generating certificates and keys ...
	I0729 19:17:09.059494  156414 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 19:17:09.059566  156414 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 19:17:09.059636  156414 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 19:17:09.062277  156414 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 19:17:09.062401  156414 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 19:17:09.062475  156414 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 19:17:09.062526  156414 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 19:17:09.062616  156414 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 19:17:09.062695  156414 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 19:17:09.062807  156414 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 19:17:09.062863  156414 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 19:17:09.062933  156414 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 19:17:09.426782  156414 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 19:17:09.599745  156414 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 19:17:09.741530  156414 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 19:17:09.907315  156414 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 19:17:10.118045  156414 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 19:17:10.118623  156414 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 19:17:10.121594  156414 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 19:17:10.124052  156414 out.go:204]   - Booting up control plane ...
	I0729 19:17:10.124173  156414 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 19:17:10.124267  156414 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 19:17:10.124374  156414 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 19:17:10.144903  156414 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 19:17:10.145010  156414 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 19:17:10.145047  156414 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 19:17:10.278905  156414 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 19:17:10.279025  156414 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 19:17:11.280964  156414 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002120381s
	I0729 19:17:11.281070  156414 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 19:17:15.782460  156414 kubeadm.go:310] [api-check] The API server is healthy after 4.501562605s
	I0729 19:17:15.804614  156414 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 19:17:15.822230  156414 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 19:17:15.849613  156414 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 19:17:15.849870  156414 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-368536 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 19:17:15.861910  156414 kubeadm.go:310] [bootstrap-token] Using token: zhramo.fqhnhxuylehyq043
	I0729 19:17:15.863215  156414 out.go:204]   - Configuring RBAC rules ...
	I0729 19:17:15.863352  156414 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 19:17:15.870893  156414 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 19:17:15.886779  156414 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 19:17:15.889933  156414 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 19:17:15.893111  156414 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 19:17:15.895970  156414 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 19:17:16.200928  156414 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 19:17:16.625621  156414 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 19:17:17.195772  156414 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 19:17:17.197712  156414 kubeadm.go:310] 
	I0729 19:17:17.197780  156414 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 19:17:17.197791  156414 kubeadm.go:310] 
	I0729 19:17:17.197874  156414 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 19:17:17.197885  156414 kubeadm.go:310] 
	I0729 19:17:17.197925  156414 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 19:17:17.198023  156414 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 19:17:17.198108  156414 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 19:17:17.198120  156414 kubeadm.go:310] 
	I0729 19:17:17.198190  156414 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 19:17:17.198200  156414 kubeadm.go:310] 
	I0729 19:17:17.198258  156414 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 19:17:17.198267  156414 kubeadm.go:310] 
	I0729 19:17:17.198347  156414 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 19:17:17.198451  156414 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 19:17:17.198529  156414 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 19:17:17.198539  156414 kubeadm.go:310] 
	I0729 19:17:17.198633  156414 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 19:17:17.198750  156414 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 19:17:17.198761  156414 kubeadm.go:310] 
	I0729 19:17:17.198895  156414 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token zhramo.fqhnhxuylehyq043 \
	I0729 19:17:17.199041  156414 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:13f0bc092dc09b7291dec830fc09c94db5ac1707dd26df6091df80009af3af7f \
	I0729 19:17:17.199074  156414 kubeadm.go:310] 	--control-plane 
	I0729 19:17:17.199081  156414 kubeadm.go:310] 
	I0729 19:17:17.199199  156414 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 19:17:17.199210  156414 kubeadm.go:310] 
	I0729 19:17:17.199327  156414 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token zhramo.fqhnhxuylehyq043 \
	I0729 19:17:17.199478  156414 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:13f0bc092dc09b7291dec830fc09c94db5ac1707dd26df6091df80009af3af7f 
	I0729 19:17:17.200591  156414 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
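As background on the join commands printed above: the --discovery-token-ca-cert-hash value is the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info, which joining nodes use to pin the CA they discover. The sketch below recomputes it; the CA path is the conventional kubeadm location and is an assumption for this example.

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		panic("no PEM block found in ca.crt")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// Hash the DER-encoded SubjectPublicKeyInfo, as kubeadm does.
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	fmt.Printf("sha256:%x\n", sum) // should match the hash in the join command above
    }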
	I0729 19:17:17.200629  156414 cni.go:84] Creating CNI manager for ""
	I0729 19:17:17.200642  156414 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:17:17.202541  156414 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 19:17:17.203847  156414 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 19:17:17.214711  156414 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 19:17:17.233233  156414 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 19:17:17.233330  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:17.233332  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-368536 minikube.k8s.io/updated_at=2024_07_29T19_17_17_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=de53afae5e8f4269438099f4dad14d93a8a17e35 minikube.k8s.io/name=embed-certs-368536 minikube.k8s.io/primary=true
	I0729 19:17:17.265931  156414 ops.go:34] apiserver oom_adj: -16
	I0729 19:17:17.410594  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:17.911585  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:18.410650  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:18.911432  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:19.411062  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:19.911629  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:20.411050  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:20.911004  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:21.411031  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:21.910787  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:22.411228  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:22.911181  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:23.410624  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:23.910844  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:24.411409  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:24.910745  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:25.410675  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:25.910901  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:26.411562  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:26.911505  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:27.411552  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:27.910916  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:28.410868  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:28.911466  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:29.410633  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:29.911613  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:29.992725  156414 kubeadm.go:1113] duration metric: took 12.75946311s to wait for elevateKubeSystemPrivileges
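The repeated `kubectl get sa default` calls above are a wait for the default ServiceAccount to appear in the new cluster, i.e. for the controller manager's service-account controller to have run, before privileges are granted to kube-system. A sketch of that polling loop follows (not minikube's code); the kubectl binary location and kubeconfig path are assumptions based on the log.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(5 * time.Minute)
    	for time.Now().Before(deadline) {
    		err := exec.Command("kubectl", "get", "sa", "default",
    			"--kubeconfig", "/var/lib/minikube/kubeconfig").Run()
    		if err == nil {
    			fmt.Println("default service account exists; RBAC bootstrap can proceed")
    			return
    		}
    		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
    	}
    	fmt.Println("timed out waiting for default service account")
    }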
	I0729 19:17:29.992767  156414 kubeadm.go:394] duration metric: took 5m12.400472687s to StartCluster
	I0729 19:17:29.992793  156414 settings.go:142] acquiring lock: {Name:mk5599d3fc2f664ed5eea99f33b4436f64ab8c83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:17:29.992902  156414 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19339-88081/kubeconfig
	I0729 19:17:29.994489  156414 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/kubeconfig: {Name:mk6702c0db404329102489655bdd2ff03ad6e919 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:17:29.994792  156414 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.95 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 19:17:29.994828  156414 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 19:17:29.994917  156414 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-368536"
	I0729 19:17:29.994954  156414 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-368536"
	I0729 19:17:29.994957  156414 addons.go:69] Setting default-storageclass=true in profile "embed-certs-368536"
	W0729 19:17:29.994966  156414 addons.go:243] addon storage-provisioner should already be in state true
	I0729 19:17:29.994969  156414 addons.go:69] Setting metrics-server=true in profile "embed-certs-368536"
	I0729 19:17:29.995004  156414 host.go:66] Checking if "embed-certs-368536" exists ...
	I0729 19:17:29.995003  156414 config.go:182] Loaded profile config "embed-certs-368536": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:17:29.995028  156414 addons.go:234] Setting addon metrics-server=true in "embed-certs-368536"
	W0729 19:17:29.995041  156414 addons.go:243] addon metrics-server should already be in state true
	I0729 19:17:29.994986  156414 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-368536"
	I0729 19:17:29.995073  156414 host.go:66] Checking if "embed-certs-368536" exists ...
	I0729 19:17:29.995409  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:17:29.995457  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:17:29.995460  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:17:29.995487  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:17:29.995487  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:17:29.995636  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:17:29.997279  156414 out.go:177] * Verifying Kubernetes components...
	I0729 19:17:29.998614  156414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:17:30.011510  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41821
	I0729 19:17:30.011717  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38475
	I0729 19:17:30.011970  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:17:30.012063  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:17:30.012480  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:17:30.012505  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:17:30.012626  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:17:30.012651  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:17:30.012967  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:17:30.013105  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:17:30.013284  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetState
	I0729 19:17:30.013527  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:17:30.013574  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:17:30.014086  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45947
	I0729 19:17:30.014502  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:17:30.015001  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:17:30.015018  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:17:30.015505  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:17:30.016720  156414 addons.go:234] Setting addon default-storageclass=true in "embed-certs-368536"
	W0729 19:17:30.016740  156414 addons.go:243] addon default-storageclass should already be in state true
	I0729 19:17:30.016770  156414 host.go:66] Checking if "embed-certs-368536" exists ...
	I0729 19:17:30.017091  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:17:30.017123  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:17:30.017432  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:17:30.017477  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:17:30.034798  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40239
	I0729 19:17:30.035372  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:17:30.036179  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:17:30.036207  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:17:30.037055  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37901
	I0729 19:17:30.037161  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46563
	I0729 19:17:30.036581  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:17:30.037493  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetState
	I0729 19:17:30.037581  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:17:30.037636  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:17:30.038047  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:17:30.038056  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:17:30.038073  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:17:30.038217  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:17:30.038403  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:17:30.038623  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:17:30.038627  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetState
	I0729 19:17:30.039185  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:17:30.039221  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:17:30.040574  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:17:30.040687  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:17:30.042879  156414 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 19:17:30.042873  156414 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:17:30.044279  156414 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 19:17:30.044298  156414 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 19:17:30.044324  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:17:30.044544  156414 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 19:17:30.044593  156414 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 19:17:30.044621  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:17:30.048075  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:17:30.048402  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:17:30.048442  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:17:30.048462  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:17:30.048613  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:17:30.048761  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:17:30.048845  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:17:30.048890  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:17:30.048914  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:17:30.049132  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:17:30.049289  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:17:30.049306  156414 sshutil.go:53] new ssh client: &{IP:192.168.50.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/embed-certs-368536/id_rsa Username:docker}
	I0729 19:17:30.049441  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:17:30.049593  156414 sshutil.go:53] new ssh client: &{IP:192.168.50.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/embed-certs-368536/id_rsa Username:docker}
	I0729 19:17:30.055718  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34081
	I0729 19:17:30.056086  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:17:30.056521  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:17:30.056546  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:17:30.056931  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:17:30.057098  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetState
	I0729 19:17:30.058559  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:17:30.058795  156414 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 19:17:30.058810  156414 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 19:17:30.058825  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:17:30.061253  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:17:30.061842  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:17:30.061880  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:17:30.061900  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:17:30.062053  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:17:30.062195  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:17:30.062346  156414 sshutil.go:53] new ssh client: &{IP:192.168.50.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/embed-certs-368536/id_rsa Username:docker}
	I0729 19:17:30.192595  156414 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 19:17:30.208960  156414 node_ready.go:35] waiting up to 6m0s for node "embed-certs-368536" to be "Ready" ...
	I0729 19:17:30.216230  156414 node_ready.go:49] node "embed-certs-368536" has status "Ready":"True"
	I0729 19:17:30.216247  156414 node_ready.go:38] duration metric: took 7.255724ms for node "embed-certs-368536" to be "Ready" ...
	I0729 19:17:30.216256  156414 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:17:30.219988  156414 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:17:30.224074  156414 pod_ready.go:92] pod "etcd-embed-certs-368536" in "kube-system" namespace has status "Ready":"True"
	I0729 19:17:30.224099  156414 pod_ready.go:81] duration metric: took 4.088257ms for pod "etcd-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:17:30.224109  156414 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:17:30.228389  156414 pod_ready.go:92] pod "kube-apiserver-embed-certs-368536" in "kube-system" namespace has status "Ready":"True"
	I0729 19:17:30.228409  156414 pod_ready.go:81] duration metric: took 4.292723ms for pod "kube-apiserver-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:17:30.228417  156414 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:17:30.233616  156414 pod_ready.go:92] pod "kube-controller-manager-embed-certs-368536" in "kube-system" namespace has status "Ready":"True"
	I0729 19:17:30.233634  156414 pod_ready.go:81] duration metric: took 5.212376ms for pod "kube-controller-manager-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:17:30.233642  156414 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:17:30.242933  156414 pod_ready.go:92] pod "kube-scheduler-embed-certs-368536" in "kube-system" namespace has status "Ready":"True"
	I0729 19:17:30.242951  156414 pod_ready.go:81] duration metric: took 9.302507ms for pod "kube-scheduler-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:17:30.242959  156414 pod_ready.go:38] duration metric: took 26.692394ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:17:30.242973  156414 api_server.go:52] waiting for apiserver process to appear ...
	I0729 19:17:30.243016  156414 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:17:30.261484  156414 api_server.go:72] duration metric: took 266.652937ms to wait for apiserver process to appear ...
	I0729 19:17:30.261513  156414 api_server.go:88] waiting for apiserver healthz status ...
	I0729 19:17:30.261534  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:17:30.269760  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 200:
	ok
	I0729 19:17:30.270848  156414 api_server.go:141] control plane version: v1.30.3
	I0729 19:17:30.270872  156414 api_server.go:131] duration metric: took 9.352433ms to wait for apiserver health ...
	I0729 19:17:30.270880  156414 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 19:17:30.312744  156414 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 19:17:30.317547  156414 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 19:17:30.317570  156414 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 19:17:30.332468  156414 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 19:17:30.352498  156414 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 19:17:30.352531  156414 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 19:17:30.392028  156414 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 19:17:30.392055  156414 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 19:17:30.413559  156414 system_pods.go:59] 4 kube-system pods found
	I0729 19:17:30.413586  156414 system_pods.go:61] "etcd-embed-certs-368536" [d1a1b671-c852-4a69-80fe-9ad13fffc01b] Running
	I0729 19:17:30.413591  156414 system_pods.go:61] "kube-apiserver-embed-certs-368536" [3b9307df-4472-499d-8a82-78f0f342e745] Running
	I0729 19:17:30.413595  156414 system_pods.go:61] "kube-controller-manager-embed-certs-368536" [89bf9b90-6735-417d-9f30-13eacb946ef4] Running
	I0729 19:17:30.413598  156414 system_pods.go:61] "kube-scheduler-embed-certs-368536" [a44062e3-c937-4765-9dfc-858cacbd3a90] Running
	I0729 19:17:30.413603  156414 system_pods.go:74] duration metric: took 142.71846ms to wait for pod list to return data ...
	I0729 19:17:30.413610  156414 default_sa.go:34] waiting for default service account to be created ...
	I0729 19:17:30.424371  156414 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 19:17:30.615212  156414 default_sa.go:45] found service account: "default"
	I0729 19:17:30.615237  156414 default_sa.go:55] duration metric: took 201.621467ms for default service account to be created ...
	I0729 19:17:30.615246  156414 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 19:17:30.831144  156414 system_pods.go:86] 4 kube-system pods found
	I0729 19:17:30.831175  156414 system_pods.go:89] "etcd-embed-certs-368536" [d1a1b671-c852-4a69-80fe-9ad13fffc01b] Running
	I0729 19:17:30.831182  156414 system_pods.go:89] "kube-apiserver-embed-certs-368536" [3b9307df-4472-499d-8a82-78f0f342e745] Running
	I0729 19:17:30.831186  156414 system_pods.go:89] "kube-controller-manager-embed-certs-368536" [89bf9b90-6735-417d-9f30-13eacb946ef4] Running
	I0729 19:17:30.831190  156414 system_pods.go:89] "kube-scheduler-embed-certs-368536" [a44062e3-c937-4765-9dfc-858cacbd3a90] Running
	I0729 19:17:30.831210  156414 retry.go:31] will retry after 301.650623ms: missing components: kube-dns, kube-proxy
	I0729 19:17:31.127532  156414 main.go:141] libmachine: Making call to close driver server
	I0729 19:17:31.127599  156414 main.go:141] libmachine: (embed-certs-368536) Calling .Close
	I0729 19:17:31.127595  156414 main.go:141] libmachine: Making call to close driver server
	I0729 19:17:31.127620  156414 main.go:141] libmachine: (embed-certs-368536) Calling .Close
	I0729 19:17:31.127910  156414 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:17:31.127925  156414 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:17:31.127935  156414 main.go:141] libmachine: Making call to close driver server
	I0729 19:17:31.127943  156414 main.go:141] libmachine: (embed-certs-368536) Calling .Close
	I0729 19:17:31.127958  156414 main.go:141] libmachine: (embed-certs-368536) DBG | Closing plugin on server side
	I0729 19:17:31.127974  156414 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:17:31.127985  156414 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:17:31.127999  156414 main.go:141] libmachine: Making call to close driver server
	I0729 19:17:31.128008  156414 main.go:141] libmachine: (embed-certs-368536) Calling .Close
	I0729 19:17:31.128212  156414 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:17:31.128221  156414 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:17:31.128440  156414 main.go:141] libmachine: (embed-certs-368536) DBG | Closing plugin on server side
	I0729 19:17:31.128455  156414 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:17:31.128467  156414 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:17:31.155504  156414 system_pods.go:86] 8 kube-system pods found
	I0729 19:17:31.155543  156414 system_pods.go:89] "coredns-7db6d8ff4d-ds92x" [81db7fca-a759-47fc-bea8-697adcb09763] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:17:31.155559  156414 system_pods.go:89] "coredns-7db6d8ff4d-gnrvx" [b3edf97c-8085-4d56-a12d-a6daa3782004] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:17:31.155565  156414 system_pods.go:89] "etcd-embed-certs-368536" [d1a1b671-c852-4a69-80fe-9ad13fffc01b] Running
	I0729 19:17:31.155570  156414 system_pods.go:89] "kube-apiserver-embed-certs-368536" [3b9307df-4472-499d-8a82-78f0f342e745] Running
	I0729 19:17:31.155575  156414 system_pods.go:89] "kube-controller-manager-embed-certs-368536" [89bf9b90-6735-417d-9f30-13eacb946ef4] Running
	I0729 19:17:31.155580  156414 system_pods.go:89] "kube-proxy-rxqlm" [1b66638b-5fb8-4bae-a129-8a1fb54389f4] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 19:17:31.155586  156414 system_pods.go:89] "kube-scheduler-embed-certs-368536" [a44062e3-c937-4765-9dfc-858cacbd3a90] Running
	I0729 19:17:31.155590  156414 system_pods.go:89] "storage-provisioner" [83d64a91-164a-45b2-9a82-d826e64e6cbd] Pending
	I0729 19:17:31.155606  156414 retry.go:31] will retry after 310.574298ms: missing components: kube-dns, kube-proxy
	I0729 19:17:31.159525  156414 main.go:141] libmachine: Making call to close driver server
	I0729 19:17:31.159546  156414 main.go:141] libmachine: (embed-certs-368536) Calling .Close
	I0729 19:17:31.160952  156414 main.go:141] libmachine: (embed-certs-368536) DBG | Closing plugin on server side
	I0729 19:17:31.160961  156414 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:17:31.160976  156414 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:17:31.346360  156414 main.go:141] libmachine: Making call to close driver server
	I0729 19:17:31.346390  156414 main.go:141] libmachine: (embed-certs-368536) Calling .Close
	I0729 19:17:31.346700  156414 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:17:31.346718  156414 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:17:31.346732  156414 main.go:141] libmachine: Making call to close driver server
	I0729 19:17:31.346742  156414 main.go:141] libmachine: (embed-certs-368536) Calling .Close
	I0729 19:17:31.347006  156414 main.go:141] libmachine: (embed-certs-368536) DBG | Closing plugin on server side
	I0729 19:17:31.347052  156414 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:17:31.347059  156414 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:17:31.347075  156414 addons.go:475] Verifying addon metrics-server=true in "embed-certs-368536"
	I0729 19:17:31.348884  156414 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0729 19:17:31.350473  156414 addons.go:510] duration metric: took 1.355642198s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0729 19:17:31.473514  156414 system_pods.go:86] 9 kube-system pods found
	I0729 19:17:31.473553  156414 system_pods.go:89] "coredns-7db6d8ff4d-ds92x" [81db7fca-a759-47fc-bea8-697adcb09763] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:17:31.473561  156414 system_pods.go:89] "coredns-7db6d8ff4d-gnrvx" [b3edf97c-8085-4d56-a12d-a6daa3782004] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:17:31.473567  156414 system_pods.go:89] "etcd-embed-certs-368536" [d1a1b671-c852-4a69-80fe-9ad13fffc01b] Running
	I0729 19:17:31.473573  156414 system_pods.go:89] "kube-apiserver-embed-certs-368536" [3b9307df-4472-499d-8a82-78f0f342e745] Running
	I0729 19:17:31.473578  156414 system_pods.go:89] "kube-controller-manager-embed-certs-368536" [89bf9b90-6735-417d-9f30-13eacb946ef4] Running
	I0729 19:17:31.473583  156414 system_pods.go:89] "kube-proxy-rxqlm" [1b66638b-5fb8-4bae-a129-8a1fb54389f4] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 19:17:31.473587  156414 system_pods.go:89] "kube-scheduler-embed-certs-368536" [a44062e3-c937-4765-9dfc-858cacbd3a90] Running
	I0729 19:17:31.473596  156414 system_pods.go:89] "metrics-server-569cc877fc-9z4tp" [1382116a-be81-46cb-92b8-ae335164f846] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:17:31.473605  156414 system_pods.go:89] "storage-provisioner" [83d64a91-164a-45b2-9a82-d826e64e6cbd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 19:17:31.473622  156414 retry.go:31] will retry after 446.790872ms: missing components: kube-dns, kube-proxy
	I0729 19:17:31.928348  156414 system_pods.go:86] 9 kube-system pods found
	I0729 19:17:31.928381  156414 system_pods.go:89] "coredns-7db6d8ff4d-ds92x" [81db7fca-a759-47fc-bea8-697adcb09763] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:17:31.928389  156414 system_pods.go:89] "coredns-7db6d8ff4d-gnrvx" [b3edf97c-8085-4d56-a12d-a6daa3782004] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:17:31.928396  156414 system_pods.go:89] "etcd-embed-certs-368536" [d1a1b671-c852-4a69-80fe-9ad13fffc01b] Running
	I0729 19:17:31.928401  156414 system_pods.go:89] "kube-apiserver-embed-certs-368536" [3b9307df-4472-499d-8a82-78f0f342e745] Running
	I0729 19:17:31.928406  156414 system_pods.go:89] "kube-controller-manager-embed-certs-368536" [89bf9b90-6735-417d-9f30-13eacb946ef4] Running
	I0729 19:17:31.928409  156414 system_pods.go:89] "kube-proxy-rxqlm" [1b66638b-5fb8-4bae-a129-8a1fb54389f4] Running
	I0729 19:17:31.928413  156414 system_pods.go:89] "kube-scheduler-embed-certs-368536" [a44062e3-c937-4765-9dfc-858cacbd3a90] Running
	I0729 19:17:31.928420  156414 system_pods.go:89] "metrics-server-569cc877fc-9z4tp" [1382116a-be81-46cb-92b8-ae335164f846] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:17:31.928429  156414 system_pods.go:89] "storage-provisioner" [83d64a91-164a-45b2-9a82-d826e64e6cbd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 19:17:31.928444  156414 retry.go:31] will retry after 467.830899ms: missing components: kube-dns
	I0729 19:17:32.403619  156414 system_pods.go:86] 9 kube-system pods found
	I0729 19:17:32.403649  156414 system_pods.go:89] "coredns-7db6d8ff4d-ds92x" [81db7fca-a759-47fc-bea8-697adcb09763] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:17:32.403659  156414 system_pods.go:89] "coredns-7db6d8ff4d-gnrvx" [b3edf97c-8085-4d56-a12d-a6daa3782004] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:17:32.403665  156414 system_pods.go:89] "etcd-embed-certs-368536" [d1a1b671-c852-4a69-80fe-9ad13fffc01b] Running
	I0729 19:17:32.403670  156414 system_pods.go:89] "kube-apiserver-embed-certs-368536" [3b9307df-4472-499d-8a82-78f0f342e745] Running
	I0729 19:17:32.403676  156414 system_pods.go:89] "kube-controller-manager-embed-certs-368536" [89bf9b90-6735-417d-9f30-13eacb946ef4] Running
	I0729 19:17:32.403683  156414 system_pods.go:89] "kube-proxy-rxqlm" [1b66638b-5fb8-4bae-a129-8a1fb54389f4] Running
	I0729 19:17:32.403689  156414 system_pods.go:89] "kube-scheduler-embed-certs-368536" [a44062e3-c937-4765-9dfc-858cacbd3a90] Running
	I0729 19:17:32.403697  156414 system_pods.go:89] "metrics-server-569cc877fc-9z4tp" [1382116a-be81-46cb-92b8-ae335164f846] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:17:32.403706  156414 system_pods.go:89] "storage-provisioner" [83d64a91-164a-45b2-9a82-d826e64e6cbd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 19:17:32.403729  156414 retry.go:31] will retry after 745.010861ms: missing components: kube-dns
	I0729 19:17:33.163660  156414 system_pods.go:86] 9 kube-system pods found
	I0729 19:17:33.163697  156414 system_pods.go:89] "coredns-7db6d8ff4d-ds92x" [81db7fca-a759-47fc-bea8-697adcb09763] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:17:33.163710  156414 system_pods.go:89] "coredns-7db6d8ff4d-gnrvx" [b3edf97c-8085-4d56-a12d-a6daa3782004] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:17:33.163719  156414 system_pods.go:89] "etcd-embed-certs-368536" [d1a1b671-c852-4a69-80fe-9ad13fffc01b] Running
	I0729 19:17:33.163733  156414 system_pods.go:89] "kube-apiserver-embed-certs-368536" [3b9307df-4472-499d-8a82-78f0f342e745] Running
	I0729 19:17:33.163740  156414 system_pods.go:89] "kube-controller-manager-embed-certs-368536" [89bf9b90-6735-417d-9f30-13eacb946ef4] Running
	I0729 19:17:33.163746  156414 system_pods.go:89] "kube-proxy-rxqlm" [1b66638b-5fb8-4bae-a129-8a1fb54389f4] Running
	I0729 19:17:33.163751  156414 system_pods.go:89] "kube-scheduler-embed-certs-368536" [a44062e3-c937-4765-9dfc-858cacbd3a90] Running
	I0729 19:17:33.163761  156414 system_pods.go:89] "metrics-server-569cc877fc-9z4tp" [1382116a-be81-46cb-92b8-ae335164f846] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:17:33.163770  156414 system_pods.go:89] "storage-provisioner" [83d64a91-164a-45b2-9a82-d826e64e6cbd] Running
	I0729 19:17:33.163791  156414 retry.go:31] will retry after 658.944312ms: missing components: kube-dns
	I0729 19:17:33.830608  156414 system_pods.go:86] 9 kube-system pods found
	I0729 19:17:33.830643  156414 system_pods.go:89] "coredns-7db6d8ff4d-ds92x" [81db7fca-a759-47fc-bea8-697adcb09763] Running
	I0729 19:17:33.830650  156414 system_pods.go:89] "coredns-7db6d8ff4d-gnrvx" [b3edf97c-8085-4d56-a12d-a6daa3782004] Running
	I0729 19:17:33.830656  156414 system_pods.go:89] "etcd-embed-certs-368536" [d1a1b671-c852-4a69-80fe-9ad13fffc01b] Running
	I0729 19:17:33.830662  156414 system_pods.go:89] "kube-apiserver-embed-certs-368536" [3b9307df-4472-499d-8a82-78f0f342e745] Running
	I0729 19:17:33.830670  156414 system_pods.go:89] "kube-controller-manager-embed-certs-368536" [89bf9b90-6735-417d-9f30-13eacb946ef4] Running
	I0729 19:17:33.830675  156414 system_pods.go:89] "kube-proxy-rxqlm" [1b66638b-5fb8-4bae-a129-8a1fb54389f4] Running
	I0729 19:17:33.830682  156414 system_pods.go:89] "kube-scheduler-embed-certs-368536" [a44062e3-c937-4765-9dfc-858cacbd3a90] Running
	I0729 19:17:33.830692  156414 system_pods.go:89] "metrics-server-569cc877fc-9z4tp" [1382116a-be81-46cb-92b8-ae335164f846] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:17:33.830703  156414 system_pods.go:89] "storage-provisioner" [83d64a91-164a-45b2-9a82-d826e64e6cbd] Running
	I0729 19:17:33.830714  156414 system_pods.go:126] duration metric: took 3.215460876s to wait for k8s-apps to be running ...
	I0729 19:17:33.830726  156414 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 19:17:33.830824  156414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 19:17:33.847810  156414 system_svc.go:56] duration metric: took 17.074145ms WaitForService to wait for kubelet
	I0729 19:17:33.847837  156414 kubeadm.go:582] duration metric: took 3.853011216s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 19:17:33.847861  156414 node_conditions.go:102] verifying NodePressure condition ...
	I0729 19:17:33.850180  156414 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 19:17:33.850198  156414 node_conditions.go:123] node cpu capacity is 2
	I0729 19:17:33.850209  156414 node_conditions.go:105] duration metric: took 2.342951ms to run NodePressure ...
	I0729 19:17:33.850221  156414 start.go:241] waiting for startup goroutines ...
	I0729 19:17:33.850230  156414 start.go:246] waiting for cluster config update ...
	I0729 19:17:33.850242  156414 start.go:255] writing updated cluster config ...
	I0729 19:17:33.850512  156414 ssh_runner.go:195] Run: rm -f paused
	I0729 19:17:33.898396  156414 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 19:17:33.899771  156414 out.go:177] * Done! kubectl is now configured to use "embed-certs-368536" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 29 19:19:15 old-k8s-version-834964 crio[654]: time="2024-07-29 19:19:15.633993283Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722280755633973058,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8d02240c-2195-4d21-84d4-8c7bfa9946f7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:19:15 old-k8s-version-834964 crio[654]: time="2024-07-29 19:19:15.634419412Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a20a6c58-23ff-4d1c-a15b-73a2e0eeafc7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:19:15 old-k8s-version-834964 crio[654]: time="2024-07-29 19:19:15.634534998Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a20a6c58-23ff-4d1c-a15b-73a2e0eeafc7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:19:15 old-k8s-version-834964 crio[654]: time="2024-07-29 19:19:15.634582883Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a20a6c58-23ff-4d1c-a15b-73a2e0eeafc7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:19:15 old-k8s-version-834964 crio[654]: time="2024-07-29 19:19:15.665118400Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e5ccd425-9ffb-4274-8640-af61274ace69 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:19:15 old-k8s-version-834964 crio[654]: time="2024-07-29 19:19:15.665212379Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e5ccd425-9ffb-4274-8640-af61274ace69 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:19:15 old-k8s-version-834964 crio[654]: time="2024-07-29 19:19:15.667336918Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d5caf048-bb5d-40b2-82ed-a0ee9c080e6b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:19:15 old-k8s-version-834964 crio[654]: time="2024-07-29 19:19:15.667779367Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722280755667753016,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d5caf048-bb5d-40b2-82ed-a0ee9c080e6b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:19:15 old-k8s-version-834964 crio[654]: time="2024-07-29 19:19:15.668276162Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2298900f-06a1-48b1-9b17-51340826f278 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:19:15 old-k8s-version-834964 crio[654]: time="2024-07-29 19:19:15.668348991Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2298900f-06a1-48b1-9b17-51340826f278 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:19:15 old-k8s-version-834964 crio[654]: time="2024-07-29 19:19:15.668386156Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=2298900f-06a1-48b1-9b17-51340826f278 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:19:15 old-k8s-version-834964 crio[654]: time="2024-07-29 19:19:15.701796386Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9e74eac7-6677-4e02-9b16-2529f741c8b6 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:19:15 old-k8s-version-834964 crio[654]: time="2024-07-29 19:19:15.701900734Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9e74eac7-6677-4e02-9b16-2529f741c8b6 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:19:15 old-k8s-version-834964 crio[654]: time="2024-07-29 19:19:15.703166946Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0b61adaa-286b-4114-b553-fd072546a1c5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:19:15 old-k8s-version-834964 crio[654]: time="2024-07-29 19:19:15.703700605Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722280755703667361,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0b61adaa-286b-4114-b553-fd072546a1c5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:19:15 old-k8s-version-834964 crio[654]: time="2024-07-29 19:19:15.704300901Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ca90118c-d1de-43e5-ac6e-78ef9e0131b6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:19:15 old-k8s-version-834964 crio[654]: time="2024-07-29 19:19:15.704372387Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ca90118c-d1de-43e5-ac6e-78ef9e0131b6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:19:15 old-k8s-version-834964 crio[654]: time="2024-07-29 19:19:15.704405279Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ca90118c-d1de-43e5-ac6e-78ef9e0131b6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:19:15 old-k8s-version-834964 crio[654]: time="2024-07-29 19:19:15.735618377Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2809cae5-38df-489c-8c7d-2bba771e2208 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:19:15 old-k8s-version-834964 crio[654]: time="2024-07-29 19:19:15.735709906Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2809cae5-38df-489c-8c7d-2bba771e2208 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:19:15 old-k8s-version-834964 crio[654]: time="2024-07-29 19:19:15.736861096Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1fbbc69f-6c44-4765-85bf-0890e248092b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:19:15 old-k8s-version-834964 crio[654]: time="2024-07-29 19:19:15.737237118Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722280755737213235,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1fbbc69f-6c44-4765-85bf-0890e248092b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:19:15 old-k8s-version-834964 crio[654]: time="2024-07-29 19:19:15.737692512Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=480f92fb-2612-47a6-8179-769f9cbc1896 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:19:15 old-k8s-version-834964 crio[654]: time="2024-07-29 19:19:15.737744353Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=480f92fb-2612-47a6-8179-769f9cbc1896 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:19:15 old-k8s-version-834964 crio[654]: time="2024-07-29 19:19:15.737784988Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=480f92fb-2612-47a6-8179-769f9cbc1896 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul29 18:59] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.057102] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044455] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.946781] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.486761] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.581640] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.337375] systemd-fstab-generator[572]: Ignoring "noauto" option for root device
	[  +0.060537] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068240] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.202307] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.149657] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.264127] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +5.997739] systemd-fstab-generator[842]: Ignoring "noauto" option for root device
	[  +0.070545] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.872502] systemd-fstab-generator[966]: Ignoring "noauto" option for root device
	[ +12.278643] kauditd_printk_skb: 46 callbacks suppressed
	[Jul29 19:03] systemd-fstab-generator[5025]: Ignoring "noauto" option for root device
	[Jul29 19:05] systemd-fstab-generator[5309]: Ignoring "noauto" option for root device
	[  +0.065530] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 19:19:15 up 20 min,  0 users,  load average: 0.16, 0.11, 0.07
	Linux old-k8s-version-834964 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 29 19:19:11 old-k8s-version-834964 kubelet[6816]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc00009e0c0, 0xc000b873b0)
	Jul 29 19:19:11 old-k8s-version-834964 kubelet[6816]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Jul 29 19:19:11 old-k8s-version-834964 kubelet[6816]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Jul 29 19:19:11 old-k8s-version-834964 kubelet[6816]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Jul 29 19:19:11 old-k8s-version-834964 kubelet[6816]: goroutine 158 [select]:
	Jul 29 19:19:11 old-k8s-version-834964 kubelet[6816]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000d05ef0, 0x4f0ac20, 0xc000b7ec80, 0x1, 0xc00009e0c0)
	Jul 29 19:19:11 old-k8s-version-834964 kubelet[6816]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Jul 29 19:19:11 old-k8s-version-834964 kubelet[6816]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0002548c0, 0xc00009e0c0)
	Jul 29 19:19:11 old-k8s-version-834964 kubelet[6816]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Jul 29 19:19:11 old-k8s-version-834964 kubelet[6816]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Jul 29 19:19:11 old-k8s-version-834964 kubelet[6816]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Jul 29 19:19:11 old-k8s-version-834964 kubelet[6816]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000693820, 0xc000bba780)
	Jul 29 19:19:11 old-k8s-version-834964 kubelet[6816]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Jul 29 19:19:11 old-k8s-version-834964 kubelet[6816]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Jul 29 19:19:11 old-k8s-version-834964 kubelet[6816]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Jul 29 19:19:11 old-k8s-version-834964 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 29 19:19:11 old-k8s-version-834964 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 29 19:19:11 old-k8s-version-834964 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 140.
	Jul 29 19:19:11 old-k8s-version-834964 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 29 19:19:11 old-k8s-version-834964 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 29 19:19:11 old-k8s-version-834964 kubelet[6825]: I0729 19:19:11.771220    6825 server.go:416] Version: v1.20.0
	Jul 29 19:19:11 old-k8s-version-834964 kubelet[6825]: I0729 19:19:11.771531    6825 server.go:837] Client rotation is on, will bootstrap in background
	Jul 29 19:19:11 old-k8s-version-834964 kubelet[6825]: I0729 19:19:11.773363    6825 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 29 19:19:11 old-k8s-version-834964 kubelet[6825]: I0729 19:19:11.774526    6825 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Jul 29 19:19:11 old-k8s-version-834964 kubelet[6825]: W0729 19:19:11.774553    6825 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-834964 -n old-k8s-version-834964
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-834964 -n old-k8s-version-834964: exit status 2 (218.626277ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-834964" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (151.32s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-368536 -n embed-certs-368536
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-07-29 19:26:34.429447189 +0000 UTC m=+6804.400150959
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-368536 -n embed-certs-368536
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-368536 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-368536 logs -n 25: (1.21089357s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable dashboard -p default-k8s-diff-port-612270       | default-k8s-diff-port-612270 | jenkins | v1.33.1 | 29 Jul 24 18:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-612270 | jenkins | v1.33.1 | 29 Jul 24 18:55 UTC | 29 Jul 24 19:04 UTC |
	|         | default-k8s-diff-port-612270                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-834964                              | old-k8s-version-834964       | jenkins | v1.33.1 | 29 Jul 24 18:55 UTC | 29 Jul 24 18:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-834964             | old-k8s-version-834964       | jenkins | v1.33.1 | 29 Jul 24 18:55 UTC | 29 Jul 24 18:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-834964                              | old-k8s-version-834964       | jenkins | v1.33.1 | 29 Jul 24 18:55 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-974855                              | cert-expiration-974855       | jenkins | v1.33.1 | 29 Jul 24 19:01 UTC | 29 Jul 24 19:01 UTC |
	| start   | -p newest-cni-453780 --memory=2200 --alsologtostderr   | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:01 UTC | 29 Jul 24 19:02 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-453780             | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-453780                                   | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-453780                  | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-453780 --memory=2200 --alsologtostderr   | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| image   | newest-cni-453780 image list                           | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-453780                                   | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-453780                                   | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-453780                                   | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	| delete  | -p newest-cni-453780                                   | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	| delete  | -p                                                     | disable-driver-mounts-148539 | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | disable-driver-mounts-148539                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-368536                                  | embed-certs-368536           | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:04 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-368536            | embed-certs-368536           | jenkins | v1.33.1 | 29 Jul 24 19:04 UTC | 29 Jul 24 19:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-368536                                  | embed-certs-368536           | jenkins | v1.33.1 | 29 Jul 24 19:04 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-368536                 | embed-certs-368536           | jenkins | v1.33.1 | 29 Jul 24 19:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-368536                                  | embed-certs-368536           | jenkins | v1.33.1 | 29 Jul 24 19:07 UTC | 29 Jul 24 19:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-834964                              | old-k8s-version-834964       | jenkins | v1.33.1 | 29 Jul 24 19:19 UTC | 29 Jul 24 19:19 UTC |
	| delete  | -p no-preload-524369                                   | no-preload-524369            | jenkins | v1.33.1 | 29 Jul 24 19:20 UTC | 29 Jul 24 19:20 UTC |
	| delete  | -p                                                     | default-k8s-diff-port-612270 | jenkins | v1.33.1 | 29 Jul 24 19:20 UTC | 29 Jul 24 19:20 UTC |
	|         | default-k8s-diff-port-612270                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 19:07:08
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 19:07:08.883432  156414 out.go:291] Setting OutFile to fd 1 ...
	I0729 19:07:08.883769  156414 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:07:08.883781  156414 out.go:304] Setting ErrFile to fd 2...
	I0729 19:07:08.883788  156414 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:07:08.883976  156414 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19339-88081/.minikube/bin
	I0729 19:07:08.884534  156414 out.go:298] Setting JSON to false
	I0729 19:07:08.885578  156414 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":13749,"bootTime":1722266280,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 19:07:08.885642  156414 start.go:139] virtualization: kvm guest
	I0729 19:07:08.888023  156414 out.go:177] * [embed-certs-368536] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 19:07:08.889601  156414 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 19:07:08.889601  156414 notify.go:220] Checking for updates...
	I0729 19:07:08.892436  156414 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 19:07:08.893966  156414 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19339-88081/kubeconfig
	I0729 19:07:08.895257  156414 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19339-88081/.minikube
	I0729 19:07:08.896516  156414 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 19:07:08.897746  156414 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 19:07:08.899225  156414 config.go:182] Loaded profile config "embed-certs-368536": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:07:08.899588  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:07:08.899642  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:07:08.914943  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40387
	I0729 19:07:08.915307  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:07:08.915885  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:07:08.915905  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:07:08.916305  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:07:08.916486  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:07:08.916703  156414 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 19:07:08.917034  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:07:08.917074  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:07:08.931497  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36817
	I0729 19:07:08.931857  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:07:08.932280  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:07:08.932300  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:07:08.932640  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:07:08.932819  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:07:08.968386  156414 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 19:07:08.969538  156414 start.go:297] selected driver: kvm2
	I0729 19:07:08.969556  156414 start.go:901] validating driver "kvm2" against &{Name:embed-certs-368536 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-368536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.95 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:07:08.969681  156414 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 19:07:08.970358  156414 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 19:07:08.970428  156414 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19339-88081/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 19:07:08.986808  156414 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 19:07:08.987271  156414 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 19:07:08.987351  156414 cni.go:84] Creating CNI manager for ""
	I0729 19:07:08.987370  156414 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:07:08.987443  156414 start.go:340] cluster config:
	{Name:embed-certs-368536 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-368536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.95 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:07:08.987605  156414 iso.go:125] acquiring lock: {Name:mkff602c753bfa3e0d79d57ee8dc490f2e8d0298 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 19:07:08.989202  156414 out.go:177] * Starting "embed-certs-368536" primary control-plane node in "embed-certs-368536" cluster
	I0729 19:07:08.990452  156414 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 19:07:08.990496  156414 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 19:07:08.990510  156414 cache.go:56] Caching tarball of preloaded images
	I0729 19:07:08.990604  156414 preload.go:172] Found /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 19:07:08.990618  156414 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 19:07:08.990746  156414 profile.go:143] Saving config to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/config.json ...
	I0729 19:07:08.990949  156414 start.go:360] acquireMachinesLock for embed-certs-368536: {Name:mkb4fc91615189a18c2505c715f6575ee0da2912 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 19:07:08.990994  156414 start.go:364] duration metric: took 26.792µs to acquireMachinesLock for "embed-certs-368536"
	I0729 19:07:08.991013  156414 start.go:96] Skipping create...Using existing machine configuration
	I0729 19:07:08.991023  156414 fix.go:54] fixHost starting: 
	I0729 19:07:08.991314  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:07:08.991356  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:07:09.006100  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45265
	I0729 19:07:09.006507  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:07:09.007034  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:07:09.007060  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:07:09.007401  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:07:09.007594  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:07:09.007758  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetState
	I0729 19:07:09.009424  156414 fix.go:112] recreateIfNeeded on embed-certs-368536: state=Running err=<nil>
	W0729 19:07:09.009448  156414 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 19:07:09.011240  156414 out.go:177] * Updating the running kvm2 "embed-certs-368536" VM ...
	I0729 19:07:09.012305  156414 machine.go:94] provisionDockerMachine start ...
	I0729 19:07:09.012324  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:07:09.012506  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:07:09.014924  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:07:09.015334  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:07:09.015362  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:07:09.015507  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:07:09.015664  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:07:09.015796  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:07:09.015962  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:07:09.016106  156414 main.go:141] libmachine: Using SSH client type: native
	I0729 19:07:09.016288  156414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.95 22 <nil> <nil>}
	I0729 19:07:09.016300  156414 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 19:07:11.913157  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:14.985074  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:21.065092  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:24.137179  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:30.217186  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:33.289154  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:41.417004  152077 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:07:41.417303  152077 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:07:41.417326  152077 kubeadm.go:310] 
	I0729 19:07:41.417370  152077 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 19:07:41.417434  152077 kubeadm.go:310] 		timed out waiting for the condition
	I0729 19:07:41.417456  152077 kubeadm.go:310] 
	I0729 19:07:41.417514  152077 kubeadm.go:310] 	This error is likely caused by:
	I0729 19:07:41.417586  152077 kubeadm.go:310] 		- The kubelet is not running
	I0729 19:07:41.417720  152077 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 19:07:41.417732  152077 kubeadm.go:310] 
	I0729 19:07:41.417870  152077 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 19:07:41.417917  152077 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 19:07:41.417972  152077 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 19:07:41.417982  152077 kubeadm.go:310] 
	I0729 19:07:41.418104  152077 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 19:07:41.418232  152077 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 19:07:41.418247  152077 kubeadm.go:310] 
	I0729 19:07:41.418370  152077 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 19:07:41.418477  152077 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 19:07:41.418596  152077 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 19:07:41.418696  152077 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 19:07:41.418706  152077 kubeadm.go:310] 
	I0729 19:07:41.419442  152077 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 19:07:41.419562  152077 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 19:07:41.419660  152077 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 19:07:41.419767  152077 kubeadm.go:394] duration metric: took 8m3.273724985s to StartCluster
	I0729 19:07:41.419853  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:07:41.419923  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:07:41.466895  152077 cri.go:89] found id: ""
	I0729 19:07:41.466922  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.466933  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:07:41.466941  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:07:41.467013  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:07:41.500836  152077 cri.go:89] found id: ""
	I0729 19:07:41.500876  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.500888  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:07:41.500896  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:07:41.500949  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:07:41.534917  152077 cri.go:89] found id: ""
	I0729 19:07:41.534946  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.534958  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:07:41.534968  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:07:41.535038  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:07:41.570516  152077 cri.go:89] found id: ""
	I0729 19:07:41.570545  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.570556  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:07:41.570565  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:07:41.570640  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:07:41.608833  152077 cri.go:89] found id: ""
	I0729 19:07:41.608881  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.608894  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:07:41.608902  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:07:41.608969  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:07:41.644079  152077 cri.go:89] found id: ""
	I0729 19:07:41.644114  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.644127  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:07:41.644136  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:07:41.644198  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:07:41.678991  152077 cri.go:89] found id: ""
	I0729 19:07:41.679019  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.679028  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:07:41.679035  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:07:41.679088  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:07:41.722775  152077 cri.go:89] found id: ""
	I0729 19:07:41.722803  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.722815  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:07:41.722829  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:07:41.722857  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:07:41.841614  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:07:41.841641  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:07:41.841658  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:07:41.945679  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:07:41.945716  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:07:41.984505  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:07:41.984536  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:07:42.037376  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:07:42.037418  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0729 19:07:42.051259  152077 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0729 19:07:42.051310  152077 out.go:239] * 
	W0729 19:07:42.051369  152077 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 19:07:42.051390  152077 out.go:239] * 
	W0729 19:07:42.052280  152077 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 19:07:42.055255  152077 out.go:177] 
	W0729 19:07:42.056299  152077 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 19:07:42.056362  152077 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0729 19:07:42.056391  152077 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0729 19:07:42.057745  152077 out.go:177] 
	I0729 19:07:42.413268  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:45.481071  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:51.561079  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:54.633091  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:00.713100  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:03.785182  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:09.865116  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:12.937102  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:19.017094  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:22.093191  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:28.169116  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:31.241130  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:37.321134  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:40.393092  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:46.473118  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:49.545166  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:55.625086  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:58.697184  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:04.777113  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:07.849165  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:13.933065  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:17.001100  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:23.081133  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:26.153086  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:32.233178  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:35.305183  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:41.385184  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:44.461106  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:50.537120  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:53.609150  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:59.689091  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:02.761193  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:08.845074  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:11.917067  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:17.993090  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:21.065137  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:27.145098  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:30.217175  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:36.301060  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:39.369118  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:45.449082  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:48.521097  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:54.601120  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:57.673200  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:03.753116  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:06.825136  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:12.905195  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:15.977118  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:22.057076  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:25.129144  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:31.209150  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:34.281164  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:40.365067  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:43.437113  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:46.437452  156414 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 19:11:46.437532  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetMachineName
	I0729 19:11:46.437865  156414 buildroot.go:166] provisioning hostname "embed-certs-368536"
	I0729 19:11:46.437902  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetMachineName
	I0729 19:11:46.438117  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:11:46.439690  156414 machine.go:97] duration metric: took 4m37.427371174s to provisionDockerMachine
	I0729 19:11:46.439733  156414 fix.go:56] duration metric: took 4m37.448711854s for fixHost
	I0729 19:11:46.439746  156414 start.go:83] releasing machines lock for "embed-certs-368536", held for 4m37.448735558s
	W0729 19:11:46.439780  156414 start.go:714] error starting host: provision: host is not running
	W0729 19:11:46.439926  156414 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0729 19:11:46.439941  156414 start.go:729] Will try again in 5 seconds ...
	I0729 19:11:51.440155  156414 start.go:360] acquireMachinesLock for embed-certs-368536: {Name:mkb4fc91615189a18c2505c715f6575ee0da2912 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 19:11:51.440266  156414 start.go:364] duration metric: took 69.381µs to acquireMachinesLock for "embed-certs-368536"
	I0729 19:11:51.440311  156414 start.go:96] Skipping create...Using existing machine configuration
	I0729 19:11:51.440322  156414 fix.go:54] fixHost starting: 
	I0729 19:11:51.440735  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:11:51.440764  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:11:51.455959  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35793
	I0729 19:11:51.456443  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:11:51.457053  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:11:51.457078  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:11:51.457475  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:11:51.457721  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:11:51.457885  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetState
	I0729 19:11:51.459531  156414 fix.go:112] recreateIfNeeded on embed-certs-368536: state=Stopped err=<nil>
	I0729 19:11:51.459558  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	W0729 19:11:51.459736  156414 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 19:11:51.461603  156414 out.go:177] * Restarting existing kvm2 VM for "embed-certs-368536" ...
	I0729 19:11:51.462768  156414 main.go:141] libmachine: (embed-certs-368536) Calling .Start
	I0729 19:11:51.462973  156414 main.go:141] libmachine: (embed-certs-368536) Ensuring networks are active...
	I0729 19:11:51.463679  156414 main.go:141] libmachine: (embed-certs-368536) Ensuring network default is active
	I0729 19:11:51.464155  156414 main.go:141] libmachine: (embed-certs-368536) Ensuring network mk-embed-certs-368536 is active
	I0729 19:11:51.464571  156414 main.go:141] libmachine: (embed-certs-368536) Getting domain xml...
	I0729 19:11:51.465359  156414 main.go:141] libmachine: (embed-certs-368536) Creating domain...
	I0729 19:11:51.794918  156414 main.go:141] libmachine: (embed-certs-368536) Waiting to get IP...
	I0729 19:11:51.795679  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:51.796159  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:51.796221  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:51.796150  157493 retry.go:31] will retry after 235.729444ms: waiting for machine to come up
	I0729 19:11:52.033554  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:52.034180  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:52.034207  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:52.034103  157493 retry.go:31] will retry after 323.595446ms: waiting for machine to come up
	I0729 19:11:52.359640  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:52.360204  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:52.360233  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:52.360143  157493 retry.go:31] will retry after 341.954873ms: waiting for machine to come up
	I0729 19:11:52.703779  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:52.704350  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:52.704373  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:52.704298  157493 retry.go:31] will retry after 385.738264ms: waiting for machine to come up
	I0729 19:11:53.091976  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:53.092451  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:53.092484  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:53.092397  157493 retry.go:31] will retry after 759.811264ms: waiting for machine to come up
	I0729 19:11:53.853241  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:53.853799  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:53.853826  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:53.853755  157493 retry.go:31] will retry after 761.294244ms: waiting for machine to come up
	I0729 19:11:54.616298  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:54.617009  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:54.617039  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:54.616952  157493 retry.go:31] will retry after 1.056936741s: waiting for machine to come up
	I0729 19:11:55.675047  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:55.675491  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:55.675514  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:55.675442  157493 retry.go:31] will retry after 1.232760679s: waiting for machine to come up
	I0729 19:11:56.909745  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:56.910283  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:56.910309  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:56.910228  157493 retry.go:31] will retry after 1.432617964s: waiting for machine to come up
	I0729 19:11:58.344399  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:58.345006  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:58.345033  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:58.344957  157493 retry.go:31] will retry after 1.914060621s: waiting for machine to come up
	I0729 19:12:00.262146  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:00.262707  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:12:00.262739  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:12:00.262638  157493 retry.go:31] will retry after 2.77447957s: waiting for machine to come up
	I0729 19:12:03.039059  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:03.039693  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:12:03.039718  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:12:03.039650  157493 retry.go:31] will retry after 2.755354142s: waiting for machine to come up
	I0729 19:12:05.797890  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:05.798279  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:12:05.798306  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:12:05.798234  157493 retry.go:31] will retry after 4.501451096s: waiting for machine to come up
	I0729 19:12:10.304116  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.304586  156414 main.go:141] libmachine: (embed-certs-368536) Found IP for machine: 192.168.50.95
	I0729 19:12:10.304616  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has current primary IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.304626  156414 main.go:141] libmachine: (embed-certs-368536) Reserving static IP address...
	I0729 19:12:10.305096  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "embed-certs-368536", mac: "52:54:00:86:e7:e8", ip: "192.168.50.95"} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.305134  156414 main.go:141] libmachine: (embed-certs-368536) DBG | skip adding static IP to network mk-embed-certs-368536 - found existing host DHCP lease matching {name: "embed-certs-368536", mac: "52:54:00:86:e7:e8", ip: "192.168.50.95"}
	I0729 19:12:10.305151  156414 main.go:141] libmachine: (embed-certs-368536) Reserved static IP address: 192.168.50.95
	I0729 19:12:10.305166  156414 main.go:141] libmachine: (embed-certs-368536) Waiting for SSH to be available...
	I0729 19:12:10.305184  156414 main.go:141] libmachine: (embed-certs-368536) DBG | Getting to WaitForSSH function...
	I0729 19:12:10.307568  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.307936  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.307972  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.308079  156414 main.go:141] libmachine: (embed-certs-368536) DBG | Using SSH client type: external
	I0729 19:12:10.308110  156414 main.go:141] libmachine: (embed-certs-368536) DBG | Using SSH private key: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/embed-certs-368536/id_rsa (-rw-------)
	I0729 19:12:10.308140  156414 main.go:141] libmachine: (embed-certs-368536) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.95 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19339-88081/.minikube/machines/embed-certs-368536/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 19:12:10.308157  156414 main.go:141] libmachine: (embed-certs-368536) DBG | About to run SSH command:
	I0729 19:12:10.308170  156414 main.go:141] libmachine: (embed-certs-368536) DBG | exit 0
	I0729 19:12:10.436958  156414 main.go:141] libmachine: (embed-certs-368536) DBG | SSH cmd err, output: <nil>: 
	I0729 19:12:10.437313  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetConfigRaw
	I0729 19:12:10.437962  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetIP
	I0729 19:12:10.440164  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.440520  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.440545  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.440930  156414 profile.go:143] Saving config to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/config.json ...
	I0729 19:12:10.441184  156414 machine.go:94] provisionDockerMachine start ...
	I0729 19:12:10.441207  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:12:10.441430  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:10.443802  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.444155  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.444187  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.444375  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:10.444541  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:10.444719  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:10.444897  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:10.445064  156414 main.go:141] libmachine: Using SSH client type: native
	I0729 19:12:10.445289  156414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.95 22 <nil> <nil>}
	I0729 19:12:10.445300  156414 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 19:12:10.557371  156414 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 19:12:10.557405  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetMachineName
	I0729 19:12:10.557675  156414 buildroot.go:166] provisioning hostname "embed-certs-368536"
	I0729 19:12:10.557706  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetMachineName
	I0729 19:12:10.557905  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:10.560444  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.560793  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.560819  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.561027  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:10.561249  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:10.561442  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:10.561594  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:10.561783  156414 main.go:141] libmachine: Using SSH client type: native
	I0729 19:12:10.561990  156414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.95 22 <nil> <nil>}
	I0729 19:12:10.562006  156414 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-368536 && echo "embed-certs-368536" | sudo tee /etc/hostname
	I0729 19:12:10.688844  156414 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-368536
	
	I0729 19:12:10.688891  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:10.691701  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.692075  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.692102  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.692293  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:10.692468  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:10.692660  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:10.692821  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:10.693024  156414 main.go:141] libmachine: Using SSH client type: native
	I0729 19:12:10.693222  156414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.95 22 <nil> <nil>}
	I0729 19:12:10.693245  156414 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-368536' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-368536/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-368536' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 19:12:10.814346  156414 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 19:12:10.814376  156414 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19339-88081/.minikube CaCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19339-88081/.minikube}
	I0729 19:12:10.814400  156414 buildroot.go:174] setting up certificates
	I0729 19:12:10.814409  156414 provision.go:84] configureAuth start
	I0729 19:12:10.814417  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetMachineName
	I0729 19:12:10.814706  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetIP
	I0729 19:12:10.817503  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.817859  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.817886  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.818025  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:10.820077  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.820443  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.820473  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.820617  156414 provision.go:143] copyHostCerts
	I0729 19:12:10.820703  156414 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem, removing ...
	I0729 19:12:10.820713  156414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem
	I0729 19:12:10.820781  156414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem (1078 bytes)
	I0729 19:12:10.820905  156414 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem, removing ...
	I0729 19:12:10.820914  156414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem
	I0729 19:12:10.820943  156414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem (1123 bytes)
	I0729 19:12:10.821005  156414 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem, removing ...
	I0729 19:12:10.821013  156414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem
	I0729 19:12:10.821041  156414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem (1679 bytes)
	I0729 19:12:10.821115  156414 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem org=jenkins.embed-certs-368536 san=[127.0.0.1 192.168.50.95 embed-certs-368536 localhost minikube]
	I0729 19:12:10.867595  156414 provision.go:177] copyRemoteCerts
	I0729 19:12:10.867661  156414 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 19:12:10.867685  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:10.870501  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.870857  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.870881  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.871063  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:10.871267  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:10.871428  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:10.871595  156414 sshutil.go:53] new ssh client: &{IP:192.168.50.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/embed-certs-368536/id_rsa Username:docker}
	I0729 19:12:10.956269  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 19:12:10.981554  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 19:12:11.006055  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 19:12:11.031709  156414 provision.go:87] duration metric: took 217.286178ms to configureAuth
	I0729 19:12:11.031746  156414 buildroot.go:189] setting minikube options for container-runtime
	I0729 19:12:11.031924  156414 config.go:182] Loaded profile config "embed-certs-368536": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:12:11.032088  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:11.034772  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.035129  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:11.035159  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.035405  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:11.035583  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:11.035737  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:11.035863  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:11.036016  156414 main.go:141] libmachine: Using SSH client type: native
	I0729 19:12:11.036203  156414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.95 22 <nil> <nil>}
	I0729 19:12:11.036218  156414 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 19:12:11.321782  156414 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 19:12:11.321810  156414 machine.go:97] duration metric: took 880.608535ms to provisionDockerMachine
	I0729 19:12:11.321823  156414 start.go:293] postStartSetup for "embed-certs-368536" (driver="kvm2")
	I0729 19:12:11.321834  156414 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 19:12:11.321850  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:12:11.322243  156414 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 19:12:11.322281  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:11.325081  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.325437  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:11.325459  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.325612  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:11.325841  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:11.326025  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:11.326177  156414 sshutil.go:53] new ssh client: &{IP:192.168.50.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/embed-certs-368536/id_rsa Username:docker}
	I0729 19:12:11.412548  156414 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 19:12:11.417360  156414 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 19:12:11.417386  156414 filesync.go:126] Scanning /home/jenkins/minikube-integration/19339-88081/.minikube/addons for local assets ...
	I0729 19:12:11.417466  156414 filesync.go:126] Scanning /home/jenkins/minikube-integration/19339-88081/.minikube/files for local assets ...
	I0729 19:12:11.417538  156414 filesync.go:149] local asset: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem -> 952822.pem in /etc/ssl/certs
	I0729 19:12:11.417632  156414 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 19:12:11.429176  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem --> /etc/ssl/certs/952822.pem (1708 bytes)
	I0729 19:12:11.457333  156414 start.go:296] duration metric: took 135.490005ms for postStartSetup
	I0729 19:12:11.457401  156414 fix.go:56] duration metric: took 20.017075153s for fixHost
	I0729 19:12:11.457428  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:11.460243  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.460653  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:11.460702  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.460873  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:11.461060  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:11.461233  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:11.461408  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:11.461586  156414 main.go:141] libmachine: Using SSH client type: native
	I0729 19:12:11.461763  156414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.95 22 <nil> <nil>}
	I0729 19:12:11.461779  156414 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 19:12:11.573697  156414 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722280331.531987438
	
	I0729 19:12:11.573724  156414 fix.go:216] guest clock: 1722280331.531987438
	I0729 19:12:11.573735  156414 fix.go:229] Guest: 2024-07-29 19:12:11.531987438 +0000 UTC Remote: 2024-07-29 19:12:11.457406225 +0000 UTC m=+302.608153452 (delta=74.581213ms)
	I0729 19:12:11.573758  156414 fix.go:200] guest clock delta is within tolerance: 74.581213ms
	I0729 19:12:11.573763  156414 start.go:83] releasing machines lock for "embed-certs-368536", held for 20.133485011s
	I0729 19:12:11.573782  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:12:11.574056  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetIP
	I0729 19:12:11.576405  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.576760  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:11.576798  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.576988  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:12:11.577479  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:12:11.577644  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:12:11.577737  156414 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 19:12:11.577799  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:11.577840  156414 ssh_runner.go:195] Run: cat /version.json
	I0729 19:12:11.577869  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:11.580473  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.580564  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.580840  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:11.580890  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.580921  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:11.580936  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.581056  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:11.581203  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:11.581358  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:11.581369  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:11.581554  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:11.581564  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:11.581669  156414 sshutil.go:53] new ssh client: &{IP:192.168.50.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/embed-certs-368536/id_rsa Username:docker}
	I0729 19:12:11.581732  156414 sshutil.go:53] new ssh client: &{IP:192.168.50.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/embed-certs-368536/id_rsa Username:docker}
	I0729 19:12:11.662254  156414 ssh_runner.go:195] Run: systemctl --version
	I0729 19:12:11.688214  156414 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 19:12:11.838322  156414 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 19:12:11.844435  156414 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 19:12:11.844521  156414 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 19:12:11.859899  156414 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 19:12:11.859923  156414 start.go:495] detecting cgroup driver to use...
	I0729 19:12:11.859990  156414 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 19:12:11.876768  156414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 19:12:11.890508  156414 docker.go:217] disabling cri-docker service (if available) ...
	I0729 19:12:11.890584  156414 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 19:12:11.904817  156414 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 19:12:11.919251  156414 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 19:12:12.053205  156414 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 19:12:12.205975  156414 docker.go:233] disabling docker service ...
	I0729 19:12:12.206041  156414 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 19:12:12.222129  156414 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 19:12:12.235288  156414 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 19:12:12.386940  156414 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 19:12:12.503688  156414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 19:12:12.518539  156414 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 19:12:12.538744  156414 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 19:12:12.538805  156414 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:12:12.549615  156414 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 19:12:12.549683  156414 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:12:12.560565  156414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:12:12.571814  156414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:12:12.582852  156414 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 19:12:12.595237  156414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:12:12.607232  156414 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:12:12.626183  156414 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:12:12.637202  156414 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 19:12:12.647141  156414 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 19:12:12.647204  156414 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 19:12:12.661099  156414 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 19:12:12.671936  156414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:12:12.795418  156414 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 19:12:12.934125  156414 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 19:12:12.934220  156414 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 19:12:12.939058  156414 start.go:563] Will wait 60s for crictl version
	I0729 19:12:12.939123  156414 ssh_runner.go:195] Run: which crictl
	I0729 19:12:12.943972  156414 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 19:12:12.990564  156414 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 19:12:12.990672  156414 ssh_runner.go:195] Run: crio --version
	I0729 19:12:13.018852  156414 ssh_runner.go:195] Run: crio --version
	I0729 19:12:13.053593  156414 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 19:12:13.054890  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetIP
	I0729 19:12:13.057601  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:13.057994  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:13.058025  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:13.058229  156414 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0729 19:12:13.062303  156414 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 19:12:13.075815  156414 kubeadm.go:883] updating cluster {Name:embed-certs-368536 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-368536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.95 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 19:12:13.075989  156414 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 19:12:13.076042  156414 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:12:13.112073  156414 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 19:12:13.112136  156414 ssh_runner.go:195] Run: which lz4
	I0729 19:12:13.116209  156414 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 19:12:13.120509  156414 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 19:12:13.120545  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 19:12:14.521429  156414 crio.go:462] duration metric: took 1.405252765s to copy over tarball
	I0729 19:12:14.521516  156414 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 19:12:16.740086  156414 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.218529852s)
	I0729 19:12:16.740129  156414 crio.go:469] duration metric: took 2.218665992s to extract the tarball
	I0729 19:12:16.740140  156414 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 19:12:16.779302  156414 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:12:16.825787  156414 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 19:12:16.825823  156414 cache_images.go:84] Images are preloaded, skipping loading
	I0729 19:12:16.825832  156414 kubeadm.go:934] updating node { 192.168.50.95 8443 v1.30.3 crio true true} ...
	I0729 19:12:16.825972  156414 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-368536 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.95
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-368536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 19:12:16.826034  156414 ssh_runner.go:195] Run: crio config
	I0729 19:12:16.873701  156414 cni.go:84] Creating CNI manager for ""
	I0729 19:12:16.873734  156414 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:12:16.873752  156414 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 19:12:16.873776  156414 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.95 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-368536 NodeName:embed-certs-368536 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.95"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.95 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 19:12:16.873923  156414 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.95
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-368536"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.95
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.95"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
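
The kubeadm, kubelet and kube-proxy documents above are the content that gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. As a minimal sketch (not minikube's own code), the KubeletConfiguration section can be parsed and sanity-checked with gopkg.in/yaml.v3; the struct below covers only the keys visible in this log, and all other fields are simply ignored by the decoder.

    package main

    import (
    	"fmt"
    	"log"

    	"gopkg.in/yaml.v3"
    )

    // kubeletConfig mirrors only the KubeletConfiguration keys shown in the log above.
    type kubeletConfig struct {
    	CgroupDriver             string `yaml:"cgroupDriver"`
    	ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
    	HairpinMode              string `yaml:"hairpinMode"`
    	StaticPodPath            string `yaml:"staticPodPath"`
    	FailSwapOn               bool   `yaml:"failSwapOn"`
    }

    func main() {
    	// Trimmed copy of the KubeletConfiguration document from the log.
    	doc := `
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: cgroupfs
    containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
    hairpinMode: hairpin-veth
    staticPodPath: /etc/kubernetes/manifests
    failSwapOn: false
    `
    	var kc kubeletConfig
    	if err := yaml.Unmarshal([]byte(doc), &kc); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Printf("cgroupDriver=%s runtime=%s\n", kc.CgroupDriver, kc.ContainerRuntimeEndpoint)
    }
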
	
	I0729 19:12:16.873987  156414 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 19:12:16.884427  156414 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 19:12:16.884530  156414 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 19:12:16.895097  156414 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0729 19:12:16.914112  156414 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 19:12:16.931797  156414 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
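
The three "scp memory --> ..." lines copy content generated in memory (the kubelet systemd drop-in, the kubelet unit, and the kubeadm YAML) onto the node over SSH. A rough equivalent, assuming golang.org/x/crypto/ssh and placeholder host/credentials rather than minikube's own ssh_runner, is to stream the bytes into sudo tee on the remote host:

    package main

    import (
    	"log"
    	"strings"

    	"golang.org/x/crypto/ssh"
    )

    // writeRemoteFile pipes content into `sudo tee <path>` over an SSH session,
    // one way to emulate the "scp memory --> file" steps in the log.
    func writeRemoteFile(client *ssh.Client, path, content string) error {
    	session, err := client.NewSession()
    	if err != nil {
    		return err
    	}
    	defer session.Close()
    	session.Stdin = strings.NewReader(content)
    	return session.Run("sudo tee " + path + " >/dev/null")
    }

    func main() {
    	// User, password and host key handling are placeholders for illustration only.
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.Password("example")},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
    	}
    	client, err := ssh.Dial("tcp", "192.168.50.95:22", cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	if err := writeRemoteFile(client, "/var/tmp/minikube/kubeadm.yaml.new", "# kubeadm config goes here\n"); err != nil {
    		log.Fatal(err)
    	}
    }
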
	I0729 19:12:16.949824  156414 ssh_runner.go:195] Run: grep 192.168.50.95	control-plane.minikube.internal$ /etc/hosts
	I0729 19:12:16.953765  156414 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.95	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 19:12:16.967138  156414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:12:17.088789  156414 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 19:12:17.108577  156414 certs.go:68] Setting up /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536 for IP: 192.168.50.95
	I0729 19:12:17.108601  156414 certs.go:194] generating shared ca certs ...
	I0729 19:12:17.108623  156414 certs.go:226] acquiring lock for ca certs: {Name:mk20106d58b3f22ea0fc0f4b499fa3fa572a2690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:12:17.108831  156414 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key
	I0729 19:12:17.109076  156414 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key
	I0729 19:12:17.109118  156414 certs.go:256] generating profile certs ...
	I0729 19:12:17.109296  156414 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/client.key
	I0729 19:12:17.109394  156414 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/apiserver.key.77ca8755
	I0729 19:12:17.109448  156414 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/proxy-client.key
	I0729 19:12:17.109619  156414 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282.pem (1338 bytes)
	W0729 19:12:17.109651  156414 certs.go:480] ignoring /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282_empty.pem, impossibly tiny 0 bytes
	I0729 19:12:17.109661  156414 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 19:12:17.109688  156414 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem (1078 bytes)
	I0729 19:12:17.109722  156414 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem (1123 bytes)
	I0729 19:12:17.109756  156414 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem (1679 bytes)
	I0729 19:12:17.109819  156414 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem (1708 bytes)
	I0729 19:12:17.110926  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 19:12:17.142003  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 19:12:17.176798  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 19:12:17.204272  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 19:12:17.244558  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0729 19:12:17.281935  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 19:12:17.306456  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 19:12:17.332576  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 19:12:17.357372  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282.pem --> /usr/share/ca-certificates/95282.pem (1338 bytes)
	I0729 19:12:17.380363  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem --> /usr/share/ca-certificates/952822.pem (1708 bytes)
	I0729 19:12:17.403737  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 19:12:17.428493  156414 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 19:12:17.445767  156414 ssh_runner.go:195] Run: openssl version
	I0729 19:12:17.451514  156414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/952822.pem && ln -fs /usr/share/ca-certificates/952822.pem /etc/ssl/certs/952822.pem"
	I0729 19:12:17.463663  156414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/952822.pem
	I0729 19:12:17.469467  156414 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:45 /usr/share/ca-certificates/952822.pem
	I0729 19:12:17.469523  156414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/952822.pem
	I0729 19:12:17.475754  156414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/952822.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 19:12:17.487617  156414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 19:12:17.498699  156414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:12:17.503205  156414 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:12:17.503257  156414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:12:17.508880  156414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 19:12:17.519570  156414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/95282.pem && ln -fs /usr/share/ca-certificates/95282.pem /etc/ssl/certs/95282.pem"
	I0729 19:12:17.530630  156414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/95282.pem
	I0729 19:12:17.535004  156414 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:45 /usr/share/ca-certificates/95282.pem
	I0729 19:12:17.535046  156414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/95282.pem
	I0729 19:12:17.540647  156414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/95282.pem /etc/ssl/certs/51391683.0"
	I0729 19:12:17.551719  156414 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 19:12:17.556199  156414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 19:12:17.562469  156414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 19:12:17.568561  156414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 19:12:17.574653  156414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 19:12:17.580458  156414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 19:12:17.586392  156414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
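
Each `openssl x509 -noout -checkend 86400` call above asks whether the certificate expires within the next 24 hours (86400 seconds); a non-zero exit would force regeneration. The same check can be sketched in Go with crypto/x509; the certificate path is a placeholder, not part of the log's command set.

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM-encoded certificate at path expires
    // within d, mirroring `openssl x509 -checkend <seconds>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	// Placeholder path; the log checks several certs under /var/lib/minikube/certs.
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("expires within 24h:", soon)
    }
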
	I0729 19:12:17.592304  156414 kubeadm.go:392] StartCluster: {Name:embed-certs-368536 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-368536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.95 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:12:17.592422  156414 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 19:12:17.592467  156414 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:12:17.631628  156414 cri.go:89] found id: ""
	I0729 19:12:17.631701  156414 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 19:12:17.642636  156414 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 19:12:17.642665  156414 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 19:12:17.642731  156414 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 19:12:17.652551  156414 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 19:12:17.653603  156414 kubeconfig.go:125] found "embed-certs-368536" server: "https://192.168.50.95:8443"
	I0729 19:12:17.656022  156414 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 19:12:17.666987  156414 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.95
	I0729 19:12:17.667014  156414 kubeadm.go:1160] stopping kube-system containers ...
	I0729 19:12:17.667026  156414 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 19:12:17.667065  156414 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:12:17.709534  156414 cri.go:89] found id: ""
	I0729 19:12:17.709598  156414 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 19:12:17.729709  156414 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:12:17.739968  156414 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:12:17.739990  156414 kubeadm.go:157] found existing configuration files:
	
	I0729 19:12:17.740051  156414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 19:12:17.749727  156414 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:12:17.749794  156414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:12:17.760013  156414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 19:12:17.781070  156414 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:12:17.781135  156414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:12:17.794747  156414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 19:12:17.805862  156414 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:12:17.805938  156414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:12:17.815892  156414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 19:12:17.825005  156414 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:12:17.825072  156414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 19:12:17.834745  156414 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:12:17.844191  156414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:12:17.963586  156414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:12:19.325254  156414 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.361626819s)
	I0729 19:12:19.325289  156414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:12:19.537565  156414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:12:19.611697  156414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:12:19.710291  156414 api_server.go:52] waiting for apiserver process to appear ...
	I0729 19:12:19.710408  156414 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:12:20.210809  156414 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:12:20.711234  156414 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:12:20.727361  156414 api_server.go:72] duration metric: took 1.017067714s to wait for apiserver process to appear ...
	I0729 19:12:20.727396  156414 api_server.go:88] waiting for apiserver healthz status ...
	I0729 19:12:20.727432  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:20.727961  156414 api_server.go:269] stopped: https://192.168.50.95:8443/healthz: Get "https://192.168.50.95:8443/healthz": dial tcp 192.168.50.95:8443: connect: connection refused
	I0729 19:12:21.228408  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:23.622048  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 19:12:23.622093  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 19:12:23.622106  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:23.628174  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 19:12:23.628195  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 19:12:23.728403  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:23.732920  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:12:23.732969  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:12:24.227954  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:24.233109  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:12:24.233143  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:12:24.727641  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:24.735127  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:12:24.735156  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:12:25.227686  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:25.236914  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:12:25.236947  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:12:25.728500  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:25.735276  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:12:25.735307  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:12:26.227824  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:26.232439  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:12:26.232471  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:12:26.728521  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:26.732952  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:12:26.732989  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:12:27.227504  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:27.232166  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 200:
	ok
	I0729 19:12:27.238505  156414 api_server.go:141] control plane version: v1.30.3
	I0729 19:12:27.238531  156414 api_server.go:131] duration metric: took 6.511128129s to wait for apiserver health ...
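
The restart path above polls https://192.168.50.95:8443/healthz roughly every 500ms: first a connection-refused error, then anonymous 403s, then 500s while post-start hooks (rbac/bootstrap-roles, apiservice-discovery-controller) finish, and finally 200 after about 6.5s. A minimal sketch of such a poll, assuming an anonymous client and skipping TLS verification since only the HTTP status code matters here; the endpoint is taken from the log.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	url := "https://192.168.50.95:8443/healthz"
    	// Anonymous probe: no client certificate, verification skipped on purpose.
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	for {
    		resp, err := client.Get(url)
    		if err != nil {
    			fmt.Println("not up yet:", err)
    		} else {
    			resp.Body.Close()
    			fmt.Println("healthz returned", resp.StatusCode)
    			if resp.StatusCode == http.StatusOK {
    				return
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }
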
	I0729 19:12:27.238541  156414 cni.go:84] Creating CNI manager for ""
	I0729 19:12:27.238560  156414 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:12:27.240943  156414 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 19:12:27.242526  156414 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 19:12:27.254578  156414 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 19:12:27.274700  156414 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 19:12:27.289359  156414 system_pods.go:59] 8 kube-system pods found
	I0729 19:12:27.289412  156414 system_pods.go:61] "coredns-7db6d8ff4d-dww2j" [31418aeb-98be-4b43-a687-5c8f64ec5e9d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:12:27.289424  156414 system_pods.go:61] "etcd-embed-certs-368536" [0a7aac00-9161-473a-87f1-2702211beaac] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 19:12:27.289443  156414 system_pods.go:61] "kube-apiserver-embed-certs-368536" [d6638a87-43df-457e-b335-1ab2aa03f421] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 19:12:27.289469  156414 system_pods.go:61] "kube-controller-manager-embed-certs-368536" [76fefc23-7794-40fc-845a-73d83eb55450] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 19:12:27.289478  156414 system_pods.go:61] "kube-proxy-9xwt2" [f637605b-9bc2-4922-b801-04f681a81e7c] Running
	I0729 19:12:27.289485  156414 system_pods.go:61] "kube-scheduler-embed-certs-368536" [476e1d42-6610-44b5-b77b-b25537f044eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 19:12:27.289495  156414 system_pods.go:61] "metrics-server-569cc877fc-xnkwq" [19328b03-8c7d-499f-9a30-31f023605e49] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:12:27.289500  156414 system_pods.go:61] "storage-provisioner" [b049d065-fbb1-48b6-a377-2938a0519c78] Running
	I0729 19:12:27.289513  156414 system_pods.go:74] duration metric: took 14.789212ms to wait for pod list to return data ...
	I0729 19:12:27.289525  156414 node_conditions.go:102] verifying NodePressure condition ...
	I0729 19:12:27.292692  156414 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 19:12:27.292716  156414 node_conditions.go:123] node cpu capacity is 2
	I0729 19:12:27.292809  156414 node_conditions.go:105] duration metric: took 3.278515ms to run NodePressure ...
	I0729 19:12:27.292825  156414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:12:27.563167  156414 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 19:12:27.567532  156414 kubeadm.go:739] kubelet initialised
	I0729 19:12:27.567552  156414 kubeadm.go:740] duration metric: took 4.363687ms waiting for restarted kubelet to initialise ...
	I0729 19:12:27.567566  156414 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:12:27.573549  156414 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-dww2j" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:29.580511  156414 pod_ready.go:102] pod "coredns-7db6d8ff4d-dww2j" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:32.080352  156414 pod_ready.go:102] pod "coredns-7db6d8ff4d-dww2j" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:34.079542  156414 pod_ready.go:92] pod "coredns-7db6d8ff4d-dww2j" in "kube-system" namespace has status "Ready":"True"
	I0729 19:12:34.079564  156414 pod_ready.go:81] duration metric: took 6.505994438s for pod "coredns-7db6d8ff4d-dww2j" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:34.079574  156414 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:36.086785  156414 pod_ready.go:102] pod "etcd-embed-certs-368536" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:37.089587  156414 pod_ready.go:92] pod "etcd-embed-certs-368536" in "kube-system" namespace has status "Ready":"True"
	I0729 19:12:37.089611  156414 pod_ready.go:81] duration metric: took 3.010030448s for pod "etcd-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.089621  156414 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.093814  156414 pod_ready.go:92] pod "kube-apiserver-embed-certs-368536" in "kube-system" namespace has status "Ready":"True"
	I0729 19:12:37.093833  156414 pod_ready.go:81] duration metric: took 4.206508ms for pod "kube-apiserver-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.093842  156414 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.097603  156414 pod_ready.go:92] pod "kube-controller-manager-embed-certs-368536" in "kube-system" namespace has status "Ready":"True"
	I0729 19:12:37.097621  156414 pod_ready.go:81] duration metric: took 3.772149ms for pod "kube-controller-manager-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.097633  156414 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9xwt2" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.101696  156414 pod_ready.go:92] pod "kube-proxy-9xwt2" in "kube-system" namespace has status "Ready":"True"
	I0729 19:12:37.101720  156414 pod_ready.go:81] duration metric: took 4.078653ms for pod "kube-proxy-9xwt2" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.101732  156414 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.107711  156414 pod_ready.go:92] pod "kube-scheduler-embed-certs-368536" in "kube-system" namespace has status "Ready":"True"
	I0729 19:12:37.107728  156414 pod_ready.go:81] duration metric: took 5.989066ms for pod "kube-scheduler-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
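
The pod_ready waits above check the Ready condition of each control-plane pod in kube-system, one pod at a time, with a 4m0s cap per pod. A comparable check can be sketched with client-go; the kubeconfig path and pod name below are placeholders for illustration, not minikube's implementation.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the PodReady condition is True.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	// Placeholder kubeconfig path.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	for {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-embed-certs-368536", metav1.GetOptions{})
    		if err == nil && isPodReady(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    }
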
	I0729 19:12:37.107738  156414 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:39.113796  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:41.115401  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:43.614461  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:45.614900  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:48.113738  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:50.114364  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:52.114790  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:54.613472  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:56.613889  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:59.114206  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:01.614516  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:04.114859  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:06.114993  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:08.615213  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:11.114015  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:13.114342  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:15.614423  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:18.114442  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:20.614465  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:23.117909  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:25.614155  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:28.114746  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:30.613689  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:32.614790  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:35.113716  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:37.116362  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:39.614545  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:42.114414  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:44.114654  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:46.114765  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:48.615523  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:51.114096  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:53.114180  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:55.614278  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:58.114138  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:00.613679  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:02.614878  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:05.117278  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:07.614025  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:09.614681  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:12.114414  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:14.114458  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:16.614364  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:19.114533  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:21.613756  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:24.114325  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:26.614276  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:29.114137  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:31.114274  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:33.115749  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:35.614067  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:37.614374  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:39.615618  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:42.114139  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:44.114503  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:46.114624  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:48.613926  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:50.614527  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:53.115129  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:55.613563  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:57.615164  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:59.616129  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:02.114384  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:04.114621  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:06.114864  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:08.115242  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:10.613949  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:13.115359  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:15.614560  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:17.615109  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:20.114341  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:22.115253  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:24.119792  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:26.614361  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:29.113806  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:31.114150  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:33.614207  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:35.616204  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:38.113264  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:40.615054  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:42.615127  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:45.115119  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:47.613589  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:49.613803  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:51.615235  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:54.113908  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:56.614614  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:59.114193  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:01.614642  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:04.114186  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:06.614156  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:08.614216  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:10.615368  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:13.116263  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:15.613987  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:17.614183  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:19.617124  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:22.114156  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:24.613643  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:26.613720  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:28.616174  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:31.114289  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:33.114818  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:35.614735  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:37.107998  156414 pod_ready.go:81] duration metric: took 4m0.000241864s for pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace to be "Ready" ...
	E0729 19:16:37.108045  156414 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 19:16:37.108068  156414 pod_ready.go:38] duration metric: took 4m9.540493845s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
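The four-minute stretch above is minikube's pod_ready helper polling the metrics-server pod every couple of seconds until its Ready condition turns True or the 4m0s budget runs out. Below is a minimal client-go sketch of the same polling pattern, not minikube's actual implementation; the kubeconfig path and the 2-second interval are illustrative assumptions, while the namespace and pod name are taken from the log.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Kubeconfig path is an assumption, not taken from this run.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Poll every 2s for up to 4m, mirroring the 4m0s wait in the log above.
        err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pod, err := client.CoreV1().Pods("kube-system").Get(ctx,
                    "metrics-server-569cc877fc-xnkwq", metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat lookup errors as "not ready yet"
                }
                return isPodReady(pod), nil
            })
        if err != nil {
            fmt.Println("timed out waiting for pod to be Ready:", err)
            return
        }
        fmt.Println("pod is Ready")
    }

When the condition never returns true, PollUntilContextTimeout returns a timeout error, which corresponds to the "will not retry" warning logged above.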
	I0729 19:16:37.108105  156414 kubeadm.go:597] duration metric: took 4m19.465427343s to restartPrimaryControlPlane
	W0729 19:16:37.108167  156414 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 19:16:37.108196  156414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 19:17:08.548650  156414 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.44042578s)
	I0729 19:17:08.548730  156414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 19:17:08.564620  156414 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:17:08.575061  156414 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:17:08.585537  156414 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:17:08.585566  156414 kubeadm.go:157] found existing configuration files:
	
	I0729 19:17:08.585610  156414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 19:17:08.594641  156414 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:17:08.594702  156414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:17:08.604434  156414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 19:17:08.613126  156414 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:17:08.613177  156414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:17:08.622123  156414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 19:17:08.630620  156414 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:17:08.630661  156414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:17:08.640140  156414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 19:17:08.648712  156414 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:17:08.648768  156414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
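Each grep/rm pair above is the same stale-config check applied to a different file: if an existing kubeconfig under /etc/kubernetes does not mention https://control-plane.minikube.internal:8443, it is removed before kubeadm init regenerates it. A minimal sketch of that check, with the endpoint and file list copied from the log and everything else an assumption:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // cleanStaleKubeconfig removes a kubeconfig-style file when it does not
    // reference the expected API-server endpoint, mirroring the grep/rm pattern
    // in the log above. Missing files are simply skipped.
    func cleanStaleKubeconfig(path, endpoint string) error {
        data, err := os.ReadFile(path)
        if os.IsNotExist(err) {
            return nil // nothing to clean up
        }
        if err != nil {
            return err
        }
        if strings.Contains(string(data), endpoint) {
            return nil // already points at the right control plane
        }
        fmt.Println("removing stale config", path)
        return os.Remove(path)
    }

    func main() {
        endpoint := "https://control-plane.minikube.internal:8443"
        for _, f := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            if err := cleanStaleKubeconfig(f, endpoint); err != nil {
                fmt.Fprintln(os.Stderr, err)
            }
        }
    }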
	I0729 19:17:08.658010  156414 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 19:17:08.709849  156414 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 19:17:08.709998  156414 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 19:17:08.850515  156414 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 19:17:08.850632  156414 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 19:17:08.850769  156414 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 19:17:09.057782  156414 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 19:17:09.059421  156414 out.go:204]   - Generating certificates and keys ...
	I0729 19:17:09.059494  156414 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 19:17:09.059566  156414 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 19:17:09.059636  156414 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 19:17:09.062277  156414 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 19:17:09.062401  156414 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 19:17:09.062475  156414 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 19:17:09.062526  156414 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 19:17:09.062616  156414 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 19:17:09.062695  156414 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 19:17:09.062807  156414 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 19:17:09.062863  156414 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 19:17:09.062933  156414 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 19:17:09.426782  156414 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 19:17:09.599745  156414 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 19:17:09.741530  156414 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 19:17:09.907315  156414 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 19:17:10.118045  156414 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 19:17:10.118623  156414 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 19:17:10.121594  156414 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 19:17:10.124052  156414 out.go:204]   - Booting up control plane ...
	I0729 19:17:10.124173  156414 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 19:17:10.124267  156414 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 19:17:10.124374  156414 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 19:17:10.144903  156414 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 19:17:10.145010  156414 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 19:17:10.145047  156414 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 19:17:10.278905  156414 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 19:17:10.279025  156414 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 19:17:11.280964  156414 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002120381s
	I0729 19:17:11.281070  156414 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 19:17:15.782460  156414 kubeadm.go:310] [api-check] The API server is healthy after 4.501562605s
	I0729 19:17:15.804614  156414 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 19:17:15.822230  156414 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 19:17:15.849613  156414 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 19:17:15.849870  156414 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-368536 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 19:17:15.861910  156414 kubeadm.go:310] [bootstrap-token] Using token: zhramo.fqhnhxuylehyq043
	I0729 19:17:15.863215  156414 out.go:204]   - Configuring RBAC rules ...
	I0729 19:17:15.863352  156414 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 19:17:15.870893  156414 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 19:17:15.886779  156414 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 19:17:15.889933  156414 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 19:17:15.893111  156414 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 19:17:15.895970  156414 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 19:17:16.200928  156414 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 19:17:16.625621  156414 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 19:17:17.195772  156414 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 19:17:17.197712  156414 kubeadm.go:310] 
	I0729 19:17:17.197780  156414 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 19:17:17.197791  156414 kubeadm.go:310] 
	I0729 19:17:17.197874  156414 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 19:17:17.197885  156414 kubeadm.go:310] 
	I0729 19:17:17.197925  156414 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 19:17:17.198023  156414 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 19:17:17.198108  156414 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 19:17:17.198120  156414 kubeadm.go:310] 
	I0729 19:17:17.198190  156414 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 19:17:17.198200  156414 kubeadm.go:310] 
	I0729 19:17:17.198258  156414 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 19:17:17.198267  156414 kubeadm.go:310] 
	I0729 19:17:17.198347  156414 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 19:17:17.198451  156414 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 19:17:17.198529  156414 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 19:17:17.198539  156414 kubeadm.go:310] 
	I0729 19:17:17.198633  156414 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 19:17:17.198750  156414 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 19:17:17.198761  156414 kubeadm.go:310] 
	I0729 19:17:17.198895  156414 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token zhramo.fqhnhxuylehyq043 \
	I0729 19:17:17.199041  156414 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:13f0bc092dc09b7291dec830fc09c94db5ac1707dd26df6091df80009af3af7f \
	I0729 19:17:17.199074  156414 kubeadm.go:310] 	--control-plane 
	I0729 19:17:17.199081  156414 kubeadm.go:310] 
	I0729 19:17:17.199199  156414 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 19:17:17.199210  156414 kubeadm.go:310] 
	I0729 19:17:17.199327  156414 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token zhramo.fqhnhxuylehyq043 \
	I0729 19:17:17.199478  156414 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:13f0bc092dc09b7291dec830fc09c94db5ac1707dd26df6091df80009af3af7f 
	I0729 19:17:17.200591  156414 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 19:17:17.200629  156414 cni.go:84] Creating CNI manager for ""
	I0729 19:17:17.200642  156414 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:17:17.202541  156414 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 19:17:17.203847  156414 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 19:17:17.214711  156414 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
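The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist configures the bridge CNI recommended above for the kvm2 + crio combination. Its exact contents are not reproduced in the log, so the snippet below writes a representative bridge/portmap conflist purely as an illustration; every field value is an assumption.

    package main

    import (
        "fmt"
        "os"
    )

    // A representative bridge CNI configuration; the real 1-k8s.conflist payload
    // is not shown in the log, so these fields are assumptions.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }
    `

    func main() {
        // Writing to a temporary path here; minikube targets /etc/cni/net.d/1-k8s.conflist.
        path := "/tmp/1-k8s.conflist"
        if err := os.WriteFile(path, []byte(bridgeConflist), 0o644); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("wrote", path)
    }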
	I0729 19:17:17.233233  156414 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 19:17:17.233330  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:17.233332  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-368536 minikube.k8s.io/updated_at=2024_07_29T19_17_17_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=de53afae5e8f4269438099f4dad14d93a8a17e35 minikube.k8s.io/name=embed-certs-368536 minikube.k8s.io/primary=true
	I0729 19:17:17.265931  156414 ops.go:34] apiserver oom_adj: -16
	I0729 19:17:17.410594  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:17.911585  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:18.410650  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:18.911432  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:19.411062  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:19.911629  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:20.411050  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:20.911004  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:21.411031  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:21.910787  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:22.411228  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:22.911181  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:23.410624  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:23.910844  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:24.411409  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:24.910745  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:25.410675  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:25.910901  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:26.411562  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:26.911505  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:27.411552  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:27.910916  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:28.410868  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:28.911466  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:29.410633  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:29.911613  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:29.992725  156414 kubeadm.go:1113] duration metric: took 12.75946311s to wait for elevateKubeSystemPrivileges
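The repeated `kubectl get sa default` runs above are the elevateKubeSystemPrivileges step waiting for the default service account to appear after the minikube-rbac clusterrolebinding is created. A small retry loop expressing the same wait is sketched below; the kubectl binary on PATH, the 500ms interval and the two-minute deadline are assumptions, while the kubeconfig path matches the one used in the log.

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // Retry `kubectl get sa default` until it succeeds, much like the loop in the
    // log above. Interval and deadline are illustrative assumptions.
    func main() {
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            cmd := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
                "get", "sa", "default")
            if err := cmd.Run(); err == nil {
                fmt.Println("default service account exists")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for the default service account")
    }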
	I0729 19:17:29.992767  156414 kubeadm.go:394] duration metric: took 5m12.400472687s to StartCluster
	I0729 19:17:29.992793  156414 settings.go:142] acquiring lock: {Name:mk5599d3fc2f664ed5eea99f33b4436f64ab8c83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:17:29.992902  156414 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19339-88081/kubeconfig
	I0729 19:17:29.994489  156414 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/kubeconfig: {Name:mk6702c0db404329102489655bdd2ff03ad6e919 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:17:29.994792  156414 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.95 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 19:17:29.994828  156414 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 19:17:29.994917  156414 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-368536"
	I0729 19:17:29.994954  156414 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-368536"
	I0729 19:17:29.994957  156414 addons.go:69] Setting default-storageclass=true in profile "embed-certs-368536"
	W0729 19:17:29.994966  156414 addons.go:243] addon storage-provisioner should already be in state true
	I0729 19:17:29.994969  156414 addons.go:69] Setting metrics-server=true in profile "embed-certs-368536"
	I0729 19:17:29.995004  156414 host.go:66] Checking if "embed-certs-368536" exists ...
	I0729 19:17:29.995003  156414 config.go:182] Loaded profile config "embed-certs-368536": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:17:29.995028  156414 addons.go:234] Setting addon metrics-server=true in "embed-certs-368536"
	W0729 19:17:29.995041  156414 addons.go:243] addon metrics-server should already be in state true
	I0729 19:17:29.994986  156414 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-368536"
	I0729 19:17:29.995073  156414 host.go:66] Checking if "embed-certs-368536" exists ...
	I0729 19:17:29.995409  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:17:29.995457  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:17:29.995460  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:17:29.995487  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:17:29.995487  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:17:29.995636  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:17:29.997279  156414 out.go:177] * Verifying Kubernetes components...
	I0729 19:17:29.998614  156414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:17:30.011510  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41821
	I0729 19:17:30.011717  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38475
	I0729 19:17:30.011970  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:17:30.012063  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:17:30.012480  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:17:30.012505  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:17:30.012626  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:17:30.012651  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:17:30.012967  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:17:30.013105  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:17:30.013284  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetState
	I0729 19:17:30.013527  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:17:30.013574  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:17:30.014086  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45947
	I0729 19:17:30.014502  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:17:30.015001  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:17:30.015018  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:17:30.015505  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:17:30.016720  156414 addons.go:234] Setting addon default-storageclass=true in "embed-certs-368536"
	W0729 19:17:30.016740  156414 addons.go:243] addon default-storageclass should already be in state true
	I0729 19:17:30.016770  156414 host.go:66] Checking if "embed-certs-368536" exists ...
	I0729 19:17:30.017091  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:17:30.017123  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:17:30.017432  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:17:30.017477  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:17:30.034798  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40239
	I0729 19:17:30.035372  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:17:30.036179  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:17:30.036207  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:17:30.037055  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37901
	I0729 19:17:30.037161  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46563
	I0729 19:17:30.036581  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:17:30.037493  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetState
	I0729 19:17:30.037581  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:17:30.037636  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:17:30.038047  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:17:30.038056  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:17:30.038073  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:17:30.038217  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:17:30.038403  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:17:30.038623  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:17:30.038627  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetState
	I0729 19:17:30.039185  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:17:30.039221  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:17:30.040574  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:17:30.040687  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:17:30.042879  156414 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 19:17:30.042873  156414 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:17:30.044279  156414 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 19:17:30.044298  156414 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 19:17:30.044324  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:17:30.044544  156414 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 19:17:30.044593  156414 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 19:17:30.044621  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:17:30.048075  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:17:30.048402  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:17:30.048442  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:17:30.048462  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:17:30.048613  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:17:30.048761  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:17:30.048845  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:17:30.048890  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:17:30.048914  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:17:30.049132  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:17:30.049289  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:17:30.049306  156414 sshutil.go:53] new ssh client: &{IP:192.168.50.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/embed-certs-368536/id_rsa Username:docker}
	I0729 19:17:30.049441  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:17:30.049593  156414 sshutil.go:53] new ssh client: &{IP:192.168.50.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/embed-certs-368536/id_rsa Username:docker}
	I0729 19:17:30.055718  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34081
	I0729 19:17:30.056086  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:17:30.056521  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:17:30.056546  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:17:30.056931  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:17:30.057098  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetState
	I0729 19:17:30.058559  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:17:30.058795  156414 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 19:17:30.058810  156414 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 19:17:30.058825  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:17:30.061253  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:17:30.061842  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:17:30.061880  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:17:30.061900  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:17:30.062053  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:17:30.062195  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:17:30.062346  156414 sshutil.go:53] new ssh client: &{IP:192.168.50.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/embed-certs-368536/id_rsa Username:docker}
	I0729 19:17:30.192595  156414 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 19:17:30.208960  156414 node_ready.go:35] waiting up to 6m0s for node "embed-certs-368536" to be "Ready" ...
	I0729 19:17:30.216230  156414 node_ready.go:49] node "embed-certs-368536" has status "Ready":"True"
	I0729 19:17:30.216247  156414 node_ready.go:38] duration metric: took 7.255724ms for node "embed-certs-368536" to be "Ready" ...
	I0729 19:17:30.216256  156414 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:17:30.219988  156414 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:17:30.224074  156414 pod_ready.go:92] pod "etcd-embed-certs-368536" in "kube-system" namespace has status "Ready":"True"
	I0729 19:17:30.224099  156414 pod_ready.go:81] duration metric: took 4.088257ms for pod "etcd-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:17:30.224109  156414 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:17:30.228389  156414 pod_ready.go:92] pod "kube-apiserver-embed-certs-368536" in "kube-system" namespace has status "Ready":"True"
	I0729 19:17:30.228409  156414 pod_ready.go:81] duration metric: took 4.292723ms for pod "kube-apiserver-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:17:30.228417  156414 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:17:30.233616  156414 pod_ready.go:92] pod "kube-controller-manager-embed-certs-368536" in "kube-system" namespace has status "Ready":"True"
	I0729 19:17:30.233634  156414 pod_ready.go:81] duration metric: took 5.212376ms for pod "kube-controller-manager-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:17:30.233642  156414 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:17:30.242933  156414 pod_ready.go:92] pod "kube-scheduler-embed-certs-368536" in "kube-system" namespace has status "Ready":"True"
	I0729 19:17:30.242951  156414 pod_ready.go:81] duration metric: took 9.302507ms for pod "kube-scheduler-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:17:30.242959  156414 pod_ready.go:38] duration metric: took 26.692394ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:17:30.242973  156414 api_server.go:52] waiting for apiserver process to appear ...
	I0729 19:17:30.243016  156414 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:17:30.261484  156414 api_server.go:72] duration metric: took 266.652937ms to wait for apiserver process to appear ...
	I0729 19:17:30.261513  156414 api_server.go:88] waiting for apiserver healthz status ...
	I0729 19:17:30.261534  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:17:30.269760  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 200:
	ok
	I0729 19:17:30.270848  156414 api_server.go:141] control plane version: v1.30.3
	I0729 19:17:30.270872  156414 api_server.go:131] duration metric: took 9.352433ms to wait for apiserver health ...
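The health check above is a GET against https://192.168.50.95:8443/healthz that must answer 200 with the body "ok", followed by a server-version query. With client-go both can be issued through the discovery client, as in the sketch below; the kubeconfig path is an assumption.

    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path is an assumption, not taken from this run.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // GET /healthz through the authenticated REST client; a healthy apiserver
        // answers 200 with the body "ok", matching the log above.
        body, err := client.Discovery().RESTClient().
            Get().
            AbsPath("/healthz").
            DoRaw(context.Background())
        if err != nil {
            fmt.Println("healthz check failed:", err)
            return
        }
        fmt.Printf("healthz: %s\n", body)

        // The "control plane version" line corresponds to a ServerVersion query.
        if v, err := client.Discovery().ServerVersion(); err == nil {
            fmt.Println("control plane version:", v.GitVersion)
        }
    }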
	I0729 19:17:30.270880  156414 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 19:17:30.312744  156414 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 19:17:30.317547  156414 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 19:17:30.317570  156414 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 19:17:30.332468  156414 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 19:17:30.352498  156414 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 19:17:30.352531  156414 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 19:17:30.392028  156414 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 19:17:30.392055  156414 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 19:17:30.413559  156414 system_pods.go:59] 4 kube-system pods found
	I0729 19:17:30.413586  156414 system_pods.go:61] "etcd-embed-certs-368536" [d1a1b671-c852-4a69-80fe-9ad13fffc01b] Running
	I0729 19:17:30.413591  156414 system_pods.go:61] "kube-apiserver-embed-certs-368536" [3b9307df-4472-499d-8a82-78f0f342e745] Running
	I0729 19:17:30.413595  156414 system_pods.go:61] "kube-controller-manager-embed-certs-368536" [89bf9b90-6735-417d-9f30-13eacb946ef4] Running
	I0729 19:17:30.413598  156414 system_pods.go:61] "kube-scheduler-embed-certs-368536" [a44062e3-c937-4765-9dfc-858cacbd3a90] Running
	I0729 19:17:30.413603  156414 system_pods.go:74] duration metric: took 142.71846ms to wait for pod list to return data ...
	I0729 19:17:30.413610  156414 default_sa.go:34] waiting for default service account to be created ...
	I0729 19:17:30.424371  156414 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 19:17:30.615212  156414 default_sa.go:45] found service account: "default"
	I0729 19:17:30.615237  156414 default_sa.go:55] duration metric: took 201.621467ms for default service account to be created ...
	I0729 19:17:30.615246  156414 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 19:17:30.831144  156414 system_pods.go:86] 4 kube-system pods found
	I0729 19:17:30.831175  156414 system_pods.go:89] "etcd-embed-certs-368536" [d1a1b671-c852-4a69-80fe-9ad13fffc01b] Running
	I0729 19:17:30.831182  156414 system_pods.go:89] "kube-apiserver-embed-certs-368536" [3b9307df-4472-499d-8a82-78f0f342e745] Running
	I0729 19:17:30.831186  156414 system_pods.go:89] "kube-controller-manager-embed-certs-368536" [89bf9b90-6735-417d-9f30-13eacb946ef4] Running
	I0729 19:17:30.831190  156414 system_pods.go:89] "kube-scheduler-embed-certs-368536" [a44062e3-c937-4765-9dfc-858cacbd3a90] Running
	I0729 19:17:30.831210  156414 retry.go:31] will retry after 301.650623ms: missing components: kube-dns, kube-proxy
	I0729 19:17:31.127532  156414 main.go:141] libmachine: Making call to close driver server
	I0729 19:17:31.127599  156414 main.go:141] libmachine: (embed-certs-368536) Calling .Close
	I0729 19:17:31.127595  156414 main.go:141] libmachine: Making call to close driver server
	I0729 19:17:31.127620  156414 main.go:141] libmachine: (embed-certs-368536) Calling .Close
	I0729 19:17:31.127910  156414 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:17:31.127925  156414 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:17:31.127935  156414 main.go:141] libmachine: Making call to close driver server
	I0729 19:17:31.127943  156414 main.go:141] libmachine: (embed-certs-368536) Calling .Close
	I0729 19:17:31.127958  156414 main.go:141] libmachine: (embed-certs-368536) DBG | Closing plugin on server side
	I0729 19:17:31.127974  156414 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:17:31.127985  156414 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:17:31.127999  156414 main.go:141] libmachine: Making call to close driver server
	I0729 19:17:31.128008  156414 main.go:141] libmachine: (embed-certs-368536) Calling .Close
	I0729 19:17:31.128212  156414 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:17:31.128221  156414 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:17:31.128440  156414 main.go:141] libmachine: (embed-certs-368536) DBG | Closing plugin on server side
	I0729 19:17:31.128455  156414 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:17:31.128467  156414 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:17:31.155504  156414 system_pods.go:86] 8 kube-system pods found
	I0729 19:17:31.155543  156414 system_pods.go:89] "coredns-7db6d8ff4d-ds92x" [81db7fca-a759-47fc-bea8-697adcb09763] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:17:31.155559  156414 system_pods.go:89] "coredns-7db6d8ff4d-gnrvx" [b3edf97c-8085-4d56-a12d-a6daa3782004] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:17:31.155565  156414 system_pods.go:89] "etcd-embed-certs-368536" [d1a1b671-c852-4a69-80fe-9ad13fffc01b] Running
	I0729 19:17:31.155570  156414 system_pods.go:89] "kube-apiserver-embed-certs-368536" [3b9307df-4472-499d-8a82-78f0f342e745] Running
	I0729 19:17:31.155575  156414 system_pods.go:89] "kube-controller-manager-embed-certs-368536" [89bf9b90-6735-417d-9f30-13eacb946ef4] Running
	I0729 19:17:31.155580  156414 system_pods.go:89] "kube-proxy-rxqlm" [1b66638b-5fb8-4bae-a129-8a1fb54389f4] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 19:17:31.155586  156414 system_pods.go:89] "kube-scheduler-embed-certs-368536" [a44062e3-c937-4765-9dfc-858cacbd3a90] Running
	I0729 19:17:31.155590  156414 system_pods.go:89] "storage-provisioner" [83d64a91-164a-45b2-9a82-d826e64e6cbd] Pending
	I0729 19:17:31.155606  156414 retry.go:31] will retry after 310.574298ms: missing components: kube-dns, kube-proxy
	I0729 19:17:31.159525  156414 main.go:141] libmachine: Making call to close driver server
	I0729 19:17:31.159546  156414 main.go:141] libmachine: (embed-certs-368536) Calling .Close
	I0729 19:17:31.160952  156414 main.go:141] libmachine: (embed-certs-368536) DBG | Closing plugin on server side
	I0729 19:17:31.160961  156414 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:17:31.160976  156414 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:17:31.346360  156414 main.go:141] libmachine: Making call to close driver server
	I0729 19:17:31.346390  156414 main.go:141] libmachine: (embed-certs-368536) Calling .Close
	I0729 19:17:31.346700  156414 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:17:31.346718  156414 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:17:31.346732  156414 main.go:141] libmachine: Making call to close driver server
	I0729 19:17:31.346742  156414 main.go:141] libmachine: (embed-certs-368536) Calling .Close
	I0729 19:17:31.347006  156414 main.go:141] libmachine: (embed-certs-368536) DBG | Closing plugin on server side
	I0729 19:17:31.347052  156414 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:17:31.347059  156414 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:17:31.347075  156414 addons.go:475] Verifying addon metrics-server=true in "embed-certs-368536"
	I0729 19:17:31.348884  156414 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0729 19:17:31.350473  156414 addons.go:510] duration metric: took 1.355642198s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0729 19:17:31.473514  156414 system_pods.go:86] 9 kube-system pods found
	I0729 19:17:31.473553  156414 system_pods.go:89] "coredns-7db6d8ff4d-ds92x" [81db7fca-a759-47fc-bea8-697adcb09763] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:17:31.473561  156414 system_pods.go:89] "coredns-7db6d8ff4d-gnrvx" [b3edf97c-8085-4d56-a12d-a6daa3782004] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:17:31.473567  156414 system_pods.go:89] "etcd-embed-certs-368536" [d1a1b671-c852-4a69-80fe-9ad13fffc01b] Running
	I0729 19:17:31.473573  156414 system_pods.go:89] "kube-apiserver-embed-certs-368536" [3b9307df-4472-499d-8a82-78f0f342e745] Running
	I0729 19:17:31.473578  156414 system_pods.go:89] "kube-controller-manager-embed-certs-368536" [89bf9b90-6735-417d-9f30-13eacb946ef4] Running
	I0729 19:17:31.473583  156414 system_pods.go:89] "kube-proxy-rxqlm" [1b66638b-5fb8-4bae-a129-8a1fb54389f4] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 19:17:31.473587  156414 system_pods.go:89] "kube-scheduler-embed-certs-368536" [a44062e3-c937-4765-9dfc-858cacbd3a90] Running
	I0729 19:17:31.473596  156414 system_pods.go:89] "metrics-server-569cc877fc-9z4tp" [1382116a-be81-46cb-92b8-ae335164f846] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:17:31.473605  156414 system_pods.go:89] "storage-provisioner" [83d64a91-164a-45b2-9a82-d826e64e6cbd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 19:17:31.473622  156414 retry.go:31] will retry after 446.790872ms: missing components: kube-dns, kube-proxy
	I0729 19:17:31.928348  156414 system_pods.go:86] 9 kube-system pods found
	I0729 19:17:31.928381  156414 system_pods.go:89] "coredns-7db6d8ff4d-ds92x" [81db7fca-a759-47fc-bea8-697adcb09763] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:17:31.928389  156414 system_pods.go:89] "coredns-7db6d8ff4d-gnrvx" [b3edf97c-8085-4d56-a12d-a6daa3782004] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:17:31.928396  156414 system_pods.go:89] "etcd-embed-certs-368536" [d1a1b671-c852-4a69-80fe-9ad13fffc01b] Running
	I0729 19:17:31.928401  156414 system_pods.go:89] "kube-apiserver-embed-certs-368536" [3b9307df-4472-499d-8a82-78f0f342e745] Running
	I0729 19:17:31.928406  156414 system_pods.go:89] "kube-controller-manager-embed-certs-368536" [89bf9b90-6735-417d-9f30-13eacb946ef4] Running
	I0729 19:17:31.928409  156414 system_pods.go:89] "kube-proxy-rxqlm" [1b66638b-5fb8-4bae-a129-8a1fb54389f4] Running
	I0729 19:17:31.928413  156414 system_pods.go:89] "kube-scheduler-embed-certs-368536" [a44062e3-c937-4765-9dfc-858cacbd3a90] Running
	I0729 19:17:31.928420  156414 system_pods.go:89] "metrics-server-569cc877fc-9z4tp" [1382116a-be81-46cb-92b8-ae335164f846] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:17:31.928429  156414 system_pods.go:89] "storage-provisioner" [83d64a91-164a-45b2-9a82-d826e64e6cbd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 19:17:31.928444  156414 retry.go:31] will retry after 467.830899ms: missing components: kube-dns
	I0729 19:17:32.403619  156414 system_pods.go:86] 9 kube-system pods found
	I0729 19:17:32.403649  156414 system_pods.go:89] "coredns-7db6d8ff4d-ds92x" [81db7fca-a759-47fc-bea8-697adcb09763] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:17:32.403659  156414 system_pods.go:89] "coredns-7db6d8ff4d-gnrvx" [b3edf97c-8085-4d56-a12d-a6daa3782004] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:17:32.403665  156414 system_pods.go:89] "etcd-embed-certs-368536" [d1a1b671-c852-4a69-80fe-9ad13fffc01b] Running
	I0729 19:17:32.403670  156414 system_pods.go:89] "kube-apiserver-embed-certs-368536" [3b9307df-4472-499d-8a82-78f0f342e745] Running
	I0729 19:17:32.403676  156414 system_pods.go:89] "kube-controller-manager-embed-certs-368536" [89bf9b90-6735-417d-9f30-13eacb946ef4] Running
	I0729 19:17:32.403683  156414 system_pods.go:89] "kube-proxy-rxqlm" [1b66638b-5fb8-4bae-a129-8a1fb54389f4] Running
	I0729 19:17:32.403689  156414 system_pods.go:89] "kube-scheduler-embed-certs-368536" [a44062e3-c937-4765-9dfc-858cacbd3a90] Running
	I0729 19:17:32.403697  156414 system_pods.go:89] "metrics-server-569cc877fc-9z4tp" [1382116a-be81-46cb-92b8-ae335164f846] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:17:32.403706  156414 system_pods.go:89] "storage-provisioner" [83d64a91-164a-45b2-9a82-d826e64e6cbd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 19:17:32.403729  156414 retry.go:31] will retry after 745.010861ms: missing components: kube-dns
	I0729 19:17:33.163660  156414 system_pods.go:86] 9 kube-system pods found
	I0729 19:17:33.163697  156414 system_pods.go:89] "coredns-7db6d8ff4d-ds92x" [81db7fca-a759-47fc-bea8-697adcb09763] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:17:33.163710  156414 system_pods.go:89] "coredns-7db6d8ff4d-gnrvx" [b3edf97c-8085-4d56-a12d-a6daa3782004] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:17:33.163719  156414 system_pods.go:89] "etcd-embed-certs-368536" [d1a1b671-c852-4a69-80fe-9ad13fffc01b] Running
	I0729 19:17:33.163733  156414 system_pods.go:89] "kube-apiserver-embed-certs-368536" [3b9307df-4472-499d-8a82-78f0f342e745] Running
	I0729 19:17:33.163740  156414 system_pods.go:89] "kube-controller-manager-embed-certs-368536" [89bf9b90-6735-417d-9f30-13eacb946ef4] Running
	I0729 19:17:33.163746  156414 system_pods.go:89] "kube-proxy-rxqlm" [1b66638b-5fb8-4bae-a129-8a1fb54389f4] Running
	I0729 19:17:33.163751  156414 system_pods.go:89] "kube-scheduler-embed-certs-368536" [a44062e3-c937-4765-9dfc-858cacbd3a90] Running
	I0729 19:17:33.163761  156414 system_pods.go:89] "metrics-server-569cc877fc-9z4tp" [1382116a-be81-46cb-92b8-ae335164f846] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:17:33.163770  156414 system_pods.go:89] "storage-provisioner" [83d64a91-164a-45b2-9a82-d826e64e6cbd] Running
	I0729 19:17:33.163791  156414 retry.go:31] will retry after 658.944312ms: missing components: kube-dns
	I0729 19:17:33.830608  156414 system_pods.go:86] 9 kube-system pods found
	I0729 19:17:33.830643  156414 system_pods.go:89] "coredns-7db6d8ff4d-ds92x" [81db7fca-a759-47fc-bea8-697adcb09763] Running
	I0729 19:17:33.830650  156414 system_pods.go:89] "coredns-7db6d8ff4d-gnrvx" [b3edf97c-8085-4d56-a12d-a6daa3782004] Running
	I0729 19:17:33.830656  156414 system_pods.go:89] "etcd-embed-certs-368536" [d1a1b671-c852-4a69-80fe-9ad13fffc01b] Running
	I0729 19:17:33.830662  156414 system_pods.go:89] "kube-apiserver-embed-certs-368536" [3b9307df-4472-499d-8a82-78f0f342e745] Running
	I0729 19:17:33.830670  156414 system_pods.go:89] "kube-controller-manager-embed-certs-368536" [89bf9b90-6735-417d-9f30-13eacb946ef4] Running
	I0729 19:17:33.830675  156414 system_pods.go:89] "kube-proxy-rxqlm" [1b66638b-5fb8-4bae-a129-8a1fb54389f4] Running
	I0729 19:17:33.830682  156414 system_pods.go:89] "kube-scheduler-embed-certs-368536" [a44062e3-c937-4765-9dfc-858cacbd3a90] Running
	I0729 19:17:33.830692  156414 system_pods.go:89] "metrics-server-569cc877fc-9z4tp" [1382116a-be81-46cb-92b8-ae335164f846] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:17:33.830703  156414 system_pods.go:89] "storage-provisioner" [83d64a91-164a-45b2-9a82-d826e64e6cbd] Running
	I0729 19:17:33.830714  156414 system_pods.go:126] duration metric: took 3.215460876s to wait for k8s-apps to be running ...
	I0729 19:17:33.830726  156414 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 19:17:33.830824  156414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 19:17:33.847810  156414 system_svc.go:56] duration metric: took 17.074145ms WaitForService to wait for kubelet
	I0729 19:17:33.847837  156414 kubeadm.go:582] duration metric: took 3.853011216s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 19:17:33.847861  156414 node_conditions.go:102] verifying NodePressure condition ...
	I0729 19:17:33.850180  156414 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 19:17:33.850198  156414 node_conditions.go:123] node cpu capacity is 2
	I0729 19:17:33.850209  156414 node_conditions.go:105] duration metric: took 2.342951ms to run NodePressure ...
	I0729 19:17:33.850221  156414 start.go:241] waiting for startup goroutines ...
	I0729 19:17:33.850230  156414 start.go:246] waiting for cluster config update ...
	I0729 19:17:33.850242  156414 start.go:255] writing updated cluster config ...
	I0729 19:17:33.850512  156414 ssh_runner.go:195] Run: rm -f paused
	I0729 19:17:33.898396  156414 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 19:17:33.899771  156414 out.go:177] * Done! kubectl is now configured to use "embed-certs-368536" cluster and "default" namespace by default
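
The retry.go lines above show the shape of minikube's wait for kube-system: list the namespace, report which components are still missing (kube-dns, kube-proxy), back off, and poll again until everything is Running. A minimal sketch of that polling pattern, assuming a client-go clientset; the function name waitForKubeSystem is hypothetical and this is illustrative only, not minikube's actual code:

	// Illustrative only: a poll-and-backoff wait of the kind the retry.go
	// lines above describe. Packages are standard client-go/apimachinery.
	package waitdemo

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	func waitForKubeSystem(ctx context.Context, client kubernetes.Interface) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 4*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, err := client.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
				if err != nil {
					return false, nil // treat API errors as transient and retry
				}
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						return false, nil // a component is still Pending; retry after backoff
					}
				}
				return true, nil // every kube-system pod reports Running
			})
	}

The sketch uses a fixed 500ms interval; the run above instead jitters its delays (310ms, 446ms, 467ms, 745ms, 658ms) between attempts, which is why each retry.go line reports a different wait.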
	
	
	==> CRI-O <==
	Jul 29 19:26:35 embed-certs-368536 crio[735]: time="2024-07-29 19:26:35.060614411Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722281195060593802,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5a56aa5e-9cd0-4d78-acf7-1bbe94e1de6f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:26:35 embed-certs-368536 crio[735]: time="2024-07-29 19:26:35.061323416Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=153fbd43-74da-4d72-ba91-ba3a8a64f53c name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:26:35 embed-certs-368536 crio[735]: time="2024-07-29 19:26:35.061375468Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=153fbd43-74da-4d72-ba91-ba3a8a64f53c name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:26:35 embed-certs-368536 crio[735]: time="2024-07-29 19:26:35.061556389Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e6538892eefc87c3409d41c364973087f89d912202bbd19204083f1d856b3881,PodSandboxId:82d2c8bcaec4e003aa086b4c64d6027ca46d08f00fe8d952e5f0f45986f4dd74,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722280653166131866,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ds92x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81db7fca-a759-47fc-bea8-697adcb09763,},Annotations:map[string]string{io.kubernetes.container.hash: 858a3191,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66a29c89e8d7b18ca5f1d479153563b6e102fa2b7d64a3b0e573196f7cbaf989,PodSandboxId:3238a9e254c813a323ea1741d89ac974dbfb3754995f117e263043f91c2c4c49,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722280653109115416,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gnrvx,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: b3edf97c-8085-4d56-a12d-a6daa3782004,},Annotations:map[string]string{io.kubernetes.container.hash: 25bf3c5d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52d51ed1bbac5c274a39024d0bad3a9452d6441b7ecdcb63f8092a7e50ed3210,PodSandboxId:e61c25a79797527e318e288f4c195021b23e9469d2bc41b628b00dff8d54c1be,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1722280651644152691,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83d64a91-164a-45b2-9a82-d826e64e6cbd,},Annotations:map[string]string{io.kubernetes.container.hash: 69c8f71d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8bd8c28c3f1568d0a5b315626538f4a9d96c454a2a0bfd77cdc1311e922494f,PodSandboxId:7620b4ba07280df59d1dc9961b559d33b14970d0ea1c054e94590c9dcb5cf54e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt
:1722280651414223075,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rxqlm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b66638b-5fb8-4bae-a129-8a1fb54389f4,},Annotations:map[string]string{io.kubernetes.container.hash: 92b80f56,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6dcda9b2e1dc9744276cda6fee20af425aa842a19ad8760203adac5ffa740008,PodSandboxId:806d4f9abd61a74145d3ce9d948e41a786cdd31f1a7e08d3562a703892e9e273,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722280631593598504,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-368536,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bac9cb10bd39355f93f79a1906e9e97,},Annotations:map[string]string{io.kubernetes.container.hash: 7015d511,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:600489eb286eac55c593cb64fe454ef94937857bc7be1de102a6f56a76604b2b,PodSandboxId:391c4528b164d0ee88276bef3140492d234a9621de7d7ba56c9d02b4169e0e0d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722280631563623279,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-368536,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f91b7a33a8ce7c8a88aef4dd4c6e195e,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1841eea64962819c42c74aa05055e43f5f2eb90205b000ed03460683caf85065,PodSandboxId:a9f6a2ce03c7314db1e75fafc80a5f967443568f5db00a812415419e9927922b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722280631536656428,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-368536,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9282a3d713124baa99437d84a975f77,},Annotations:map[string]string{io.kubernetes.container.hash: 2605caa3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d868148f3454fd7df7aeaf1387b63fd517d97e1cc6f64b01c1d3e9a82c0e876,PodSandboxId:359254c3e62383d1d2f61b9cd235a08509cdecf5db99244f91c6541e3f8b64f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722280631527294058,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-368536,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b71fb0d9dc40452f3a849de813c6e179,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f60b3d1fca4830e462e9f2f4362caaabfc8186d78ecddbe6108ffba50a30a9a5,PodSandboxId:ffdffecccfc3f8b332033dbb5baecfdcb19a3e62c604ebc8256464baf118188b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722280340337352693,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-368536,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9282a3d713124baa99437d84a975f77,},Annotations:map[string]string{io.kubernetes.container.hash: 2605caa3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=153fbd43-74da-4d72-ba91-ba3a8a64f53c name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:26:35 embed-certs-368536 crio[735]: time="2024-07-29 19:26:35.101678351Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cc0783a7-a4d8-4a67-9dbe-bc1253a8aa71 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:26:35 embed-certs-368536 crio[735]: time="2024-07-29 19:26:35.101754103Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cc0783a7-a4d8-4a67-9dbe-bc1253a8aa71 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:26:35 embed-certs-368536 crio[735]: time="2024-07-29 19:26:35.103090869Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=13a0ec99-8342-491c-96ae-e234b222a5c0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:26:35 embed-certs-368536 crio[735]: time="2024-07-29 19:26:35.103531021Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722281195103508360,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=13a0ec99-8342-491c-96ae-e234b222a5c0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:26:35 embed-certs-368536 crio[735]: time="2024-07-29 19:26:35.104092531Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e40e9b07-25f5-4ac6-b60e-1265b2ea881a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:26:35 embed-certs-368536 crio[735]: time="2024-07-29 19:26:35.104143218Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e40e9b07-25f5-4ac6-b60e-1265b2ea881a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:26:35 embed-certs-368536 crio[735]: time="2024-07-29 19:26:35.104347619Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e6538892eefc87c3409d41c364973087f89d912202bbd19204083f1d856b3881,PodSandboxId:82d2c8bcaec4e003aa086b4c64d6027ca46d08f00fe8d952e5f0f45986f4dd74,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722280653166131866,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ds92x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81db7fca-a759-47fc-bea8-697adcb09763,},Annotations:map[string]string{io.kubernetes.container.hash: 858a3191,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66a29c89e8d7b18ca5f1d479153563b6e102fa2b7d64a3b0e573196f7cbaf989,PodSandboxId:3238a9e254c813a323ea1741d89ac974dbfb3754995f117e263043f91c2c4c49,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722280653109115416,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gnrvx,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: b3edf97c-8085-4d56-a12d-a6daa3782004,},Annotations:map[string]string{io.kubernetes.container.hash: 25bf3c5d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52d51ed1bbac5c274a39024d0bad3a9452d6441b7ecdcb63f8092a7e50ed3210,PodSandboxId:e61c25a79797527e318e288f4c195021b23e9469d2bc41b628b00dff8d54c1be,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1722280651644152691,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83d64a91-164a-45b2-9a82-d826e64e6cbd,},Annotations:map[string]string{io.kubernetes.container.hash: 69c8f71d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8bd8c28c3f1568d0a5b315626538f4a9d96c454a2a0bfd77cdc1311e922494f,PodSandboxId:7620b4ba07280df59d1dc9961b559d33b14970d0ea1c054e94590c9dcb5cf54e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt
:1722280651414223075,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rxqlm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b66638b-5fb8-4bae-a129-8a1fb54389f4,},Annotations:map[string]string{io.kubernetes.container.hash: 92b80f56,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6dcda9b2e1dc9744276cda6fee20af425aa842a19ad8760203adac5ffa740008,PodSandboxId:806d4f9abd61a74145d3ce9d948e41a786cdd31f1a7e08d3562a703892e9e273,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722280631593598504,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-368536,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bac9cb10bd39355f93f79a1906e9e97,},Annotations:map[string]string{io.kubernetes.container.hash: 7015d511,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:600489eb286eac55c593cb64fe454ef94937857bc7be1de102a6f56a76604b2b,PodSandboxId:391c4528b164d0ee88276bef3140492d234a9621de7d7ba56c9d02b4169e0e0d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722280631563623279,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-368536,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f91b7a33a8ce7c8a88aef4dd4c6e195e,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1841eea64962819c42c74aa05055e43f5f2eb90205b000ed03460683caf85065,PodSandboxId:a9f6a2ce03c7314db1e75fafc80a5f967443568f5db00a812415419e9927922b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722280631536656428,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-368536,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9282a3d713124baa99437d84a975f77,},Annotations:map[string]string{io.kubernetes.container.hash: 2605caa3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d868148f3454fd7df7aeaf1387b63fd517d97e1cc6f64b01c1d3e9a82c0e876,PodSandboxId:359254c3e62383d1d2f61b9cd235a08509cdecf5db99244f91c6541e3f8b64f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722280631527294058,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-368536,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b71fb0d9dc40452f3a849de813c6e179,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f60b3d1fca4830e462e9f2f4362caaabfc8186d78ecddbe6108ffba50a30a9a5,PodSandboxId:ffdffecccfc3f8b332033dbb5baecfdcb19a3e62c604ebc8256464baf118188b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722280340337352693,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-368536,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9282a3d713124baa99437d84a975f77,},Annotations:map[string]string{io.kubernetes.container.hash: 2605caa3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e40e9b07-25f5-4ac6-b60e-1265b2ea881a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:26:35 embed-certs-368536 crio[735]: time="2024-07-29 19:26:35.149206363Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2b05ebbc-10ec-4155-964a-121738cd2ec2 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:26:35 embed-certs-368536 crio[735]: time="2024-07-29 19:26:35.149278374Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2b05ebbc-10ec-4155-964a-121738cd2ec2 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:26:35 embed-certs-368536 crio[735]: time="2024-07-29 19:26:35.150610631Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cf68137c-7800-474c-8bd1-1d1d80f2273a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:26:35 embed-certs-368536 crio[735]: time="2024-07-29 19:26:35.151102750Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722281195151080584,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cf68137c-7800-474c-8bd1-1d1d80f2273a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:26:35 embed-certs-368536 crio[735]: time="2024-07-29 19:26:35.152119996Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5eaf1b2b-92fb-4111-b698-dfafb8735873 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:26:35 embed-certs-368536 crio[735]: time="2024-07-29 19:26:35.152172378Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5eaf1b2b-92fb-4111-b698-dfafb8735873 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:26:35 embed-certs-368536 crio[735]: time="2024-07-29 19:26:35.152351469Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e6538892eefc87c3409d41c364973087f89d912202bbd19204083f1d856b3881,PodSandboxId:82d2c8bcaec4e003aa086b4c64d6027ca46d08f00fe8d952e5f0f45986f4dd74,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722280653166131866,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ds92x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81db7fca-a759-47fc-bea8-697adcb09763,},Annotations:map[string]string{io.kubernetes.container.hash: 858a3191,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66a29c89e8d7b18ca5f1d479153563b6e102fa2b7d64a3b0e573196f7cbaf989,PodSandboxId:3238a9e254c813a323ea1741d89ac974dbfb3754995f117e263043f91c2c4c49,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722280653109115416,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gnrvx,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: b3edf97c-8085-4d56-a12d-a6daa3782004,},Annotations:map[string]string{io.kubernetes.container.hash: 25bf3c5d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52d51ed1bbac5c274a39024d0bad3a9452d6441b7ecdcb63f8092a7e50ed3210,PodSandboxId:e61c25a79797527e318e288f4c195021b23e9469d2bc41b628b00dff8d54c1be,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1722280651644152691,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83d64a91-164a-45b2-9a82-d826e64e6cbd,},Annotations:map[string]string{io.kubernetes.container.hash: 69c8f71d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8bd8c28c3f1568d0a5b315626538f4a9d96c454a2a0bfd77cdc1311e922494f,PodSandboxId:7620b4ba07280df59d1dc9961b559d33b14970d0ea1c054e94590c9dcb5cf54e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt
:1722280651414223075,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rxqlm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b66638b-5fb8-4bae-a129-8a1fb54389f4,},Annotations:map[string]string{io.kubernetes.container.hash: 92b80f56,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6dcda9b2e1dc9744276cda6fee20af425aa842a19ad8760203adac5ffa740008,PodSandboxId:806d4f9abd61a74145d3ce9d948e41a786cdd31f1a7e08d3562a703892e9e273,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722280631593598504,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-368536,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bac9cb10bd39355f93f79a1906e9e97,},Annotations:map[string]string{io.kubernetes.container.hash: 7015d511,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:600489eb286eac55c593cb64fe454ef94937857bc7be1de102a6f56a76604b2b,PodSandboxId:391c4528b164d0ee88276bef3140492d234a9621de7d7ba56c9d02b4169e0e0d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722280631563623279,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-368536,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f91b7a33a8ce7c8a88aef4dd4c6e195e,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1841eea64962819c42c74aa05055e43f5f2eb90205b000ed03460683caf85065,PodSandboxId:a9f6a2ce03c7314db1e75fafc80a5f967443568f5db00a812415419e9927922b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722280631536656428,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-368536,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9282a3d713124baa99437d84a975f77,},Annotations:map[string]string{io.kubernetes.container.hash: 2605caa3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d868148f3454fd7df7aeaf1387b63fd517d97e1cc6f64b01c1d3e9a82c0e876,PodSandboxId:359254c3e62383d1d2f61b9cd235a08509cdecf5db99244f91c6541e3f8b64f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722280631527294058,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-368536,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b71fb0d9dc40452f3a849de813c6e179,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f60b3d1fca4830e462e9f2f4362caaabfc8186d78ecddbe6108ffba50a30a9a5,PodSandboxId:ffdffecccfc3f8b332033dbb5baecfdcb19a3e62c604ebc8256464baf118188b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722280340337352693,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-368536,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9282a3d713124baa99437d84a975f77,},Annotations:map[string]string{io.kubernetes.container.hash: 2605caa3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5eaf1b2b-92fb-4111-b698-dfafb8735873 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:26:35 embed-certs-368536 crio[735]: time="2024-07-29 19:26:35.187347763Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6e49b151-fa31-4f49-9fe1-98a9f1307493 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:26:35 embed-certs-368536 crio[735]: time="2024-07-29 19:26:35.187428112Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6e49b151-fa31-4f49-9fe1-98a9f1307493 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:26:35 embed-certs-368536 crio[735]: time="2024-07-29 19:26:35.188615265Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ce9f34c8-76a3-4e5d-a452-f4bd5c65c40b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:26:35 embed-certs-368536 crio[735]: time="2024-07-29 19:26:35.189119974Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722281195189098268,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ce9f34c8-76a3-4e5d-a452-f4bd5c65c40b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:26:35 embed-certs-368536 crio[735]: time="2024-07-29 19:26:35.190308070Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9ab8cb6e-94f5-4820-9103-ce03db19fb9c name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:26:35 embed-certs-368536 crio[735]: time="2024-07-29 19:26:35.190379402Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9ab8cb6e-94f5-4820-9103-ce03db19fb9c name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:26:35 embed-certs-368536 crio[735]: time="2024-07-29 19:26:35.190603507Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e6538892eefc87c3409d41c364973087f89d912202bbd19204083f1d856b3881,PodSandboxId:82d2c8bcaec4e003aa086b4c64d6027ca46d08f00fe8d952e5f0f45986f4dd74,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722280653166131866,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ds92x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81db7fca-a759-47fc-bea8-697adcb09763,},Annotations:map[string]string{io.kubernetes.container.hash: 858a3191,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66a29c89e8d7b18ca5f1d479153563b6e102fa2b7d64a3b0e573196f7cbaf989,PodSandboxId:3238a9e254c813a323ea1741d89ac974dbfb3754995f117e263043f91c2c4c49,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722280653109115416,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gnrvx,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: b3edf97c-8085-4d56-a12d-a6daa3782004,},Annotations:map[string]string{io.kubernetes.container.hash: 25bf3c5d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52d51ed1bbac5c274a39024d0bad3a9452d6441b7ecdcb63f8092a7e50ed3210,PodSandboxId:e61c25a79797527e318e288f4c195021b23e9469d2bc41b628b00dff8d54c1be,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1722280651644152691,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83d64a91-164a-45b2-9a82-d826e64e6cbd,},Annotations:map[string]string{io.kubernetes.container.hash: 69c8f71d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8bd8c28c3f1568d0a5b315626538f4a9d96c454a2a0bfd77cdc1311e922494f,PodSandboxId:7620b4ba07280df59d1dc9961b559d33b14970d0ea1c054e94590c9dcb5cf54e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt
:1722280651414223075,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rxqlm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b66638b-5fb8-4bae-a129-8a1fb54389f4,},Annotations:map[string]string{io.kubernetes.container.hash: 92b80f56,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6dcda9b2e1dc9744276cda6fee20af425aa842a19ad8760203adac5ffa740008,PodSandboxId:806d4f9abd61a74145d3ce9d948e41a786cdd31f1a7e08d3562a703892e9e273,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722280631593598504,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-368536,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bac9cb10bd39355f93f79a1906e9e97,},Annotations:map[string]string{io.kubernetes.container.hash: 7015d511,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:600489eb286eac55c593cb64fe454ef94937857bc7be1de102a6f56a76604b2b,PodSandboxId:391c4528b164d0ee88276bef3140492d234a9621de7d7ba56c9d02b4169e0e0d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722280631563623279,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-368536,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f91b7a33a8ce7c8a88aef4dd4c6e195e,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1841eea64962819c42c74aa05055e43f5f2eb90205b000ed03460683caf85065,PodSandboxId:a9f6a2ce03c7314db1e75fafc80a5f967443568f5db00a812415419e9927922b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722280631536656428,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-368536,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9282a3d713124baa99437d84a975f77,},Annotations:map[string]string{io.kubernetes.container.hash: 2605caa3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d868148f3454fd7df7aeaf1387b63fd517d97e1cc6f64b01c1d3e9a82c0e876,PodSandboxId:359254c3e62383d1d2f61b9cd235a08509cdecf5db99244f91c6541e3f8b64f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722280631527294058,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-368536,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b71fb0d9dc40452f3a849de813c6e179,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f60b3d1fca4830e462e9f2f4362caaabfc8186d78ecddbe6108ffba50a30a9a5,PodSandboxId:ffdffecccfc3f8b332033dbb5baecfdcb19a3e62c604ebc8256464baf118188b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722280340337352693,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-368536,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9282a3d713124baa99437d84a975f77,},Annotations:map[string]string{io.kubernetes.container.hash: 2605caa3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9ab8cb6e-94f5-4820-9103-ce03db19fb9c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e6538892eefc8       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   82d2c8bcaec4e       coredns-7db6d8ff4d-ds92x
	66a29c89e8d7b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   3238a9e254c81       coredns-7db6d8ff4d-gnrvx
	52d51ed1bbac5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   e61c25a797975       storage-provisioner
	b8bd8c28c3f15       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   9 minutes ago       Running             kube-proxy                0                   7620b4ba07280       kube-proxy-rxqlm
	6dcda9b2e1dc9       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   806d4f9abd61a       etcd-embed-certs-368536
	600489eb286ea       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   9 minutes ago       Running             kube-scheduler            2                   391c4528b164d       kube-scheduler-embed-certs-368536
	1841eea649628       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   9 minutes ago       Running             kube-apiserver            2                   a9f6a2ce03c73       kube-apiserver-embed-certs-368536
	5d868148f3454       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   9 minutes ago       Running             kube-controller-manager   2                   359254c3e6238       kube-controller-manager-embed-certs-368536
	f60b3d1fca483       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   14 minutes ago      Exited              kube-apiserver            1                   ffdffecccfc3f       kube-apiserver-embed-certs-368536
	
	
	==> coredns [66a29c89e8d7b18ca5f1d479153563b6e102fa2b7d64a3b0e573196f7cbaf989] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [e6538892eefc87c3409d41c364973087f89d912202bbd19204083f1d856b3881] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               embed-certs-368536
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-368536
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=de53afae5e8f4269438099f4dad14d93a8a17e35
	                    minikube.k8s.io/name=embed-certs-368536
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T19_17_17_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 19:17:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-368536
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 19:26:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 19:22:44 +0000   Mon, 29 Jul 2024 19:17:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 19:22:44 +0000   Mon, 29 Jul 2024 19:17:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 19:22:44 +0000   Mon, 29 Jul 2024 19:17:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 19:22:44 +0000   Mon, 29 Jul 2024 19:17:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.95
	  Hostname:    embed-certs-368536
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8eee350f9b324193b8de34dbb432d91e
	  System UUID:                8eee350f-9b32-4193-b8de-34dbb432d91e
	  Boot ID:                    d0bedae2-8e93-4de8-9199-f4e1e7af4ab9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-ds92x                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m4s
	  kube-system                 coredns-7db6d8ff4d-gnrvx                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m5s
	  kube-system                 etcd-embed-certs-368536                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m19s
	  kube-system                 kube-apiserver-embed-certs-368536             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-controller-manager-embed-certs-368536    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-proxy-rxqlm                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m5s
	  kube-system                 kube-scheduler-embed-certs-368536             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 metrics-server-569cc877fc-9z4tp               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m4s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m3s   kube-proxy       
	  Normal  Starting                 9m19s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m19s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m19s  kubelet          Node embed-certs-368536 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m19s  kubelet          Node embed-certs-368536 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m19s  kubelet          Node embed-certs-368536 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m5s   node-controller  Node embed-certs-368536 event: Registered Node embed-certs-368536 in Controller
	
	
	==> dmesg <==
	[  +0.040146] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.780007] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Jul29 19:12] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.490513] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.347328] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +0.066076] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.075137] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +0.175373] systemd-fstab-generator[677]: Ignoring "noauto" option for root device
	[  +0.146386] systemd-fstab-generator[689]: Ignoring "noauto" option for root device
	[  +0.286031] systemd-fstab-generator[719]: Ignoring "noauto" option for root device
	[  +4.297208] systemd-fstab-generator[818]: Ignoring "noauto" option for root device
	[  +0.058855] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.365142] systemd-fstab-generator[943]: Ignoring "noauto" option for root device
	[  +5.597558] kauditd_printk_skb: 97 callbacks suppressed
	[  +7.612073] kauditd_printk_skb: 50 callbacks suppressed
	[  +6.026927] kauditd_printk_skb: 27 callbacks suppressed
	[Jul29 19:17] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.662898] systemd-fstab-generator[3575]: Ignoring "noauto" option for root device
	[  +4.754849] kauditd_printk_skb: 54 callbacks suppressed
	[  +1.332354] systemd-fstab-generator[3894]: Ignoring "noauto" option for root device
	[ +13.799696] systemd-fstab-generator[4109]: Ignoring "noauto" option for root device
	[  +0.101879] kauditd_printk_skb: 14 callbacks suppressed
	[Jul29 19:18] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [6dcda9b2e1dc9744276cda6fee20af425aa842a19ad8760203adac5ffa740008] <==
	{"level":"info","ts":"2024-07-29T19:17:11.911658Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"94e27d43a39d2148 switched to configuration voters=(10728274991811207496)"}
	{"level":"info","ts":"2024-07-29T19:17:11.91178Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"78c5ccfc677e9ba5","local-member-id":"94e27d43a39d2148","added-peer-id":"94e27d43a39d2148","added-peer-peer-urls":["https://192.168.50.95:2380"]}
	{"level":"info","ts":"2024-07-29T19:17:11.942029Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T19:17:11.942339Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"94e27d43a39d2148","initial-advertise-peer-urls":["https://192.168.50.95:2380"],"listen-peer-urls":["https://192.168.50.95:2380"],"advertise-client-urls":["https://192.168.50.95:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.95:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T19:17:11.942399Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T19:17:11.942535Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.95:2380"}
	{"level":"info","ts":"2024-07-29T19:17:11.942561Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.95:2380"}
	{"level":"info","ts":"2024-07-29T19:17:12.772817Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"94e27d43a39d2148 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-29T19:17:12.773001Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"94e27d43a39d2148 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-29T19:17:12.773077Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"94e27d43a39d2148 received MsgPreVoteResp from 94e27d43a39d2148 at term 1"}
	{"level":"info","ts":"2024-07-29T19:17:12.773173Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"94e27d43a39d2148 became candidate at term 2"}
	{"level":"info","ts":"2024-07-29T19:17:12.773198Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"94e27d43a39d2148 received MsgVoteResp from 94e27d43a39d2148 at term 2"}
	{"level":"info","ts":"2024-07-29T19:17:12.773278Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"94e27d43a39d2148 became leader at term 2"}
	{"level":"info","ts":"2024-07-29T19:17:12.773308Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 94e27d43a39d2148 elected leader 94e27d43a39d2148 at term 2"}
	{"level":"info","ts":"2024-07-29T19:17:12.775169Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T19:17:12.776732Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"94e27d43a39d2148","local-member-attributes":"{Name:embed-certs-368536 ClientURLs:[https://192.168.50.95:2379]}","request-path":"/0/members/94e27d43a39d2148/attributes","cluster-id":"78c5ccfc677e9ba5","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T19:17:12.777273Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T19:17:12.777295Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T19:17:12.777771Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T19:17:12.777818Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T19:17:12.777925Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"78c5ccfc677e9ba5","local-member-id":"94e27d43a39d2148","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T19:17:12.778019Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T19:17:12.778065Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T19:17:12.779938Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.95:2379"}
	{"level":"info","ts":"2024-07-29T19:17:12.782415Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 19:26:35 up 14 min,  0 users,  load average: 0.12, 0.13, 0.09
	Linux embed-certs-368536 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1841eea64962819c42c74aa05055e43f5f2eb90205b000ed03460683caf85065] <==
	I0729 19:20:32.087204       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 19:22:14.128511       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 19:22:14.128648       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0729 19:22:15.129346       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 19:22:15.129483       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 19:22:15.129516       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 19:22:15.129392       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 19:22:15.129640       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 19:22:15.130904       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 19:23:15.130014       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 19:23:15.130087       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 19:23:15.130096       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 19:23:15.131169       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 19:23:15.131264       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 19:23:15.131291       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 19:25:15.130811       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 19:25:15.130963       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 19:25:15.130976       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 19:25:15.132025       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 19:25:15.132169       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 19:25:15.132199       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [f60b3d1fca4830e462e9f2f4362caaabfc8186d78ecddbe6108ffba50a30a9a5] <==
	W0729 19:17:06.718210       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:17:06.806619       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:17:06.878550       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:17:06.902632       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:17:06.915684       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:17:06.941071       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:17:06.948599       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:17:07.016201       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:17:07.083579       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:17:07.218685       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:17:07.226395       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:17:07.267219       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:17:07.281542       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:17:07.380464       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:17:07.398657       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:17:07.419317       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:17:07.462260       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:17:07.534002       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:17:07.701507       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:17:07.739537       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:17:07.741960       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:17:07.945434       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:17:08.071350       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:17:08.074748       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:17:08.161146       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [5d868148f3454fd7df7aeaf1387b63fd517d97e1cc6f64b01c1d3e9a82c0e876] <==
	I0729 19:21:00.590499       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:21:30.145275       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 19:21:30.597787       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:22:00.150059       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 19:22:00.607333       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:22:30.156530       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 19:22:30.615488       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:23:00.161698       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 19:23:00.622840       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0729 19:23:19.550167       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="215.741µs"
	E0729 19:23:30.167088       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 19:23:30.630303       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0729 19:23:34.551115       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="113.581µs"
	E0729 19:24:00.172083       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 19:24:00.639586       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:24:30.179318       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 19:24:30.648439       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:25:00.184018       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 19:25:00.657506       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:25:30.189108       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 19:25:30.664805       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:26:00.194639       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 19:26:00.673107       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:26:30.199961       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 19:26:30.681752       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [b8bd8c28c3f1568d0a5b315626538f4a9d96c454a2a0bfd77cdc1311e922494f] <==
	I0729 19:17:31.724194       1 server_linux.go:69] "Using iptables proxy"
	I0729 19:17:31.747326       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.95"]
	I0729 19:17:31.884657       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 19:17:31.884715       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 19:17:31.884731       1 server_linux.go:165] "Using iptables Proxier"
	I0729 19:17:31.888261       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 19:17:31.888587       1 server.go:872] "Version info" version="v1.30.3"
	I0729 19:17:31.888618       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 19:17:31.890053       1 config.go:192] "Starting service config controller"
	I0729 19:17:31.890126       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 19:17:31.890162       1 config.go:101] "Starting endpoint slice config controller"
	I0729 19:17:31.890165       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 19:17:31.891059       1 config.go:319] "Starting node config controller"
	I0729 19:17:31.891088       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 19:17:31.990853       1 shared_informer.go:320] Caches are synced for service config
	I0729 19:17:31.990903       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 19:17:31.991229       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [600489eb286eac55c593cb64fe454ef94937857bc7be1de102a6f56a76604b2b] <==
	W0729 19:17:14.153457       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 19:17:14.153531       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 19:17:14.153734       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 19:17:14.153793       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 19:17:14.153828       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 19:17:14.153846       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 19:17:14.153853       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 19:17:14.153806       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 19:17:15.080411       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 19:17:15.080462       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 19:17:15.113920       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 19:17:15.113994       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 19:17:15.131801       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 19:17:15.131908       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0729 19:17:15.154984       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 19:17:15.155031       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0729 19:17:15.193952       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 19:17:15.193995       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 19:17:15.251209       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 19:17:15.251256       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 19:17:15.265404       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 19:17:15.265431       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 19:17:15.369207       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 19:17:15.369340       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0729 19:17:18.639851       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 19:24:16 embed-certs-368536 kubelet[3901]: E0729 19:24:16.549727    3901 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 19:24:16 embed-certs-368536 kubelet[3901]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 19:24:16 embed-certs-368536 kubelet[3901]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 19:24:16 embed-certs-368536 kubelet[3901]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 19:24:16 embed-certs-368536 kubelet[3901]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 19:24:23 embed-certs-368536 kubelet[3901]: E0729 19:24:23.537631    3901 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-9z4tp" podUID="1382116a-be81-46cb-92b8-ae335164f846"
	Jul 29 19:24:37 embed-certs-368536 kubelet[3901]: E0729 19:24:37.537076    3901 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-9z4tp" podUID="1382116a-be81-46cb-92b8-ae335164f846"
	Jul 29 19:24:50 embed-certs-368536 kubelet[3901]: E0729 19:24:50.537062    3901 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-9z4tp" podUID="1382116a-be81-46cb-92b8-ae335164f846"
	Jul 29 19:25:01 embed-certs-368536 kubelet[3901]: E0729 19:25:01.536919    3901 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-9z4tp" podUID="1382116a-be81-46cb-92b8-ae335164f846"
	Jul 29 19:25:13 embed-certs-368536 kubelet[3901]: E0729 19:25:13.536474    3901 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-9z4tp" podUID="1382116a-be81-46cb-92b8-ae335164f846"
	Jul 29 19:25:16 embed-certs-368536 kubelet[3901]: E0729 19:25:16.550914    3901 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 19:25:16 embed-certs-368536 kubelet[3901]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 19:25:16 embed-certs-368536 kubelet[3901]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 19:25:16 embed-certs-368536 kubelet[3901]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 19:25:16 embed-certs-368536 kubelet[3901]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 19:25:27 embed-certs-368536 kubelet[3901]: E0729 19:25:27.537157    3901 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-9z4tp" podUID="1382116a-be81-46cb-92b8-ae335164f846"
	Jul 29 19:25:41 embed-certs-368536 kubelet[3901]: E0729 19:25:41.536441    3901 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-9z4tp" podUID="1382116a-be81-46cb-92b8-ae335164f846"
	Jul 29 19:25:56 embed-certs-368536 kubelet[3901]: E0729 19:25:56.538091    3901 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-9z4tp" podUID="1382116a-be81-46cb-92b8-ae335164f846"
	Jul 29 19:26:08 embed-certs-368536 kubelet[3901]: E0729 19:26:08.538085    3901 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-9z4tp" podUID="1382116a-be81-46cb-92b8-ae335164f846"
	Jul 29 19:26:16 embed-certs-368536 kubelet[3901]: E0729 19:26:16.550467    3901 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 19:26:16 embed-certs-368536 kubelet[3901]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 19:26:16 embed-certs-368536 kubelet[3901]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 19:26:16 embed-certs-368536 kubelet[3901]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 19:26:16 embed-certs-368536 kubelet[3901]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 19:26:22 embed-certs-368536 kubelet[3901]: E0729 19:26:22.538771    3901 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-9z4tp" podUID="1382116a-be81-46cb-92b8-ae335164f846"
	
	
	==> storage-provisioner [52d51ed1bbac5c274a39024d0bad3a9452d6441b7ecdcb63f8092a7e50ed3210] <==
	I0729 19:17:31.770076       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 19:17:31.791523       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 19:17:31.791930       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 19:17:31.810673       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 19:17:31.811833       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-368536_220d82ce-99df-41e3-9f88-758388c9244a!
	I0729 19:17:31.811615       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"acd7b0d5-1a16-4d6d-8e6a-624c5d75b549", APIVersion:"v1", ResourceVersion:"398", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-368536_220d82ce-99df-41e3-9f88-758388c9244a became leader
	I0729 19:17:31.912831       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-368536_220d82ce-99df-41e3-9f88-758388c9244a!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-368536 -n embed-certs-368536
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-368536 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-9z4tp
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-368536 describe pod metrics-server-569cc877fc-9z4tp
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-368536 describe pod metrics-server-569cc877fc-9z4tp: exit status 1 (60.102692ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-9z4tp" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-368536 describe pod metrics-server-569cc877fc-9z4tp: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.20s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (374.61s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0729 19:26:39.106831   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/kindnet-085245/client.crt: no such file or directory
E0729 19:26:41.635149   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/flannel-085245/client.crt: no such file or directory
E0729 19:26:59.907797   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/no-preload-524369/client.crt: no such file or directory
E0729 19:27:20.014410   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/default-k8s-diff-port-612270/client.crt: no such file or directory
E0729 19:27:40.400388   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/custom-flannel-085245/client.crt: no such file or directory
E0729 19:27:43.522267   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/old-k8s-version-834964/client.crt: no such file or directory
E0729 19:27:46.273325   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/enable-default-cni-085245/client.crt: no such file or directory
E0729 19:27:47.698371   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/default-k8s-diff-port-612270/client.crt: no such file or directory
E0729 19:27:56.702917   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/bridge-085245/client.crt: no such file or directory
E0729 19:28:07.573712   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/calico-085245/client.crt: no such file or directory
E0729 19:28:11.206893   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/old-k8s-version-834964/client.crt: no such file or directory
E0729 19:28:18.903203   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/functional-810151/client.crt: no such file or directory
E0729 19:28:38.590048   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/flannel-085245/client.crt: no such file or directory
E0729 19:29:53.656592   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/bridge-085245/client.crt: no such file or directory
E0729 19:30:04.528688   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/calico-085245/client.crt: no such file or directory
E0729 19:30:47.656430   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/auto-085245/client.crt: no such file or directory
E0729 19:30:53.333818   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/client.crt: no such file or directory
E0729 19:31:32.222458   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/no-preload-524369/client.crt: no such file or directory
E0729 19:31:39.106452   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/kindnet-085245/client.crt: no such file or directory
E0729 19:32:20.013735   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/default-k8s-diff-port-612270/client.crt: no such file or directory
E0729 19:32:40.401007   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/custom-flannel-085245/client.crt: no such file or directory
E0729 19:32:43.522735   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/old-k8s-version-834964/client.crt: no such file or directory
E0729 19:32:46.273365   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/enable-default-cni-085245/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-368536 -n embed-certs-368536
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-07-29 19:32:49.078192054 +0000 UTC m=+7179.048895830
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-368536 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-368536 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.419µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-368536 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
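For reference, a manual spot-check of the assertion above might look like the following. This is an illustrative sketch only, not part of the recorded run, and it assumes the embed-certs-368536 apiserver is reachable (it was not at this point, hence the context deadline errors):

  kubectl --context embed-certs-368536 -n kubernetes-dashboard get deploy dashboard-metrics-scraper \
    -o jsonpath='{.spec.template.spec.containers[*].image}'
  # expected to contain the custom image registry.k8s.io/echoserver:1.4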
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-368536 -n embed-certs-368536
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-368536 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-368536 logs -n 25: (1.194384211s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable dashboard -p default-k8s-diff-port-612270       | default-k8s-diff-port-612270 | jenkins | v1.33.1 | 29 Jul 24 18:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-612270 | jenkins | v1.33.1 | 29 Jul 24 18:55 UTC | 29 Jul 24 19:04 UTC |
	|         | default-k8s-diff-port-612270                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-834964                              | old-k8s-version-834964       | jenkins | v1.33.1 | 29 Jul 24 18:55 UTC | 29 Jul 24 18:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-834964             | old-k8s-version-834964       | jenkins | v1.33.1 | 29 Jul 24 18:55 UTC | 29 Jul 24 18:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-834964                              | old-k8s-version-834964       | jenkins | v1.33.1 | 29 Jul 24 18:55 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-974855                              | cert-expiration-974855       | jenkins | v1.33.1 | 29 Jul 24 19:01 UTC | 29 Jul 24 19:01 UTC |
	| start   | -p newest-cni-453780 --memory=2200 --alsologtostderr   | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:01 UTC | 29 Jul 24 19:02 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-453780             | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-453780                                   | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-453780                  | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-453780 --memory=2200 --alsologtostderr   | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| image   | newest-cni-453780 image list                           | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-453780                                   | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-453780                                   | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-453780                                   | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	| delete  | -p newest-cni-453780                                   | newest-cni-453780            | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	| delete  | -p                                                     | disable-driver-mounts-148539 | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:02 UTC |
	|         | disable-driver-mounts-148539                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-368536                                  | embed-certs-368536           | jenkins | v1.33.1 | 29 Jul 24 19:02 UTC | 29 Jul 24 19:04 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-368536            | embed-certs-368536           | jenkins | v1.33.1 | 29 Jul 24 19:04 UTC | 29 Jul 24 19:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-368536                                  | embed-certs-368536           | jenkins | v1.33.1 | 29 Jul 24 19:04 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-368536                 | embed-certs-368536           | jenkins | v1.33.1 | 29 Jul 24 19:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-368536                                  | embed-certs-368536           | jenkins | v1.33.1 | 29 Jul 24 19:07 UTC | 29 Jul 24 19:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-834964                              | old-k8s-version-834964       | jenkins | v1.33.1 | 29 Jul 24 19:19 UTC | 29 Jul 24 19:19 UTC |
	| delete  | -p no-preload-524369                                   | no-preload-524369            | jenkins | v1.33.1 | 29 Jul 24 19:20 UTC | 29 Jul 24 19:20 UTC |
	| delete  | -p                                                     | default-k8s-diff-port-612270 | jenkins | v1.33.1 | 29 Jul 24 19:20 UTC | 29 Jul 24 19:20 UTC |
	|         | default-k8s-diff-port-612270                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 19:07:08
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 19:07:08.883432  156414 out.go:291] Setting OutFile to fd 1 ...
	I0729 19:07:08.883769  156414 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:07:08.883781  156414 out.go:304] Setting ErrFile to fd 2...
	I0729 19:07:08.883788  156414 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:07:08.883976  156414 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19339-88081/.minikube/bin
	I0729 19:07:08.884534  156414 out.go:298] Setting JSON to false
	I0729 19:07:08.885578  156414 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":13749,"bootTime":1722266280,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 19:07:08.885642  156414 start.go:139] virtualization: kvm guest
	I0729 19:07:08.888023  156414 out.go:177] * [embed-certs-368536] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 19:07:08.889601  156414 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 19:07:08.889601  156414 notify.go:220] Checking for updates...
	I0729 19:07:08.892436  156414 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 19:07:08.893966  156414 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19339-88081/kubeconfig
	I0729 19:07:08.895257  156414 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19339-88081/.minikube
	I0729 19:07:08.896516  156414 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 19:07:08.897746  156414 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 19:07:08.899225  156414 config.go:182] Loaded profile config "embed-certs-368536": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:07:08.899588  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:07:08.899642  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:07:08.914943  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40387
	I0729 19:07:08.915307  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:07:08.915885  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:07:08.915905  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:07:08.916305  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:07:08.916486  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:07:08.916703  156414 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 19:07:08.917034  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:07:08.917074  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:07:08.931497  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36817
	I0729 19:07:08.931857  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:07:08.932280  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:07:08.932300  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:07:08.932640  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:07:08.932819  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:07:08.968386  156414 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 19:07:08.969538  156414 start.go:297] selected driver: kvm2
	I0729 19:07:08.969556  156414 start.go:901] validating driver "kvm2" against &{Name:embed-certs-368536 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-368536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.95 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:07:08.969681  156414 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 19:07:08.970358  156414 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 19:07:08.970428  156414 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19339-88081/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 19:07:08.986808  156414 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 19:07:08.987271  156414 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 19:07:08.987351  156414 cni.go:84] Creating CNI manager for ""
	I0729 19:07:08.987370  156414 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:07:08.987443  156414 start.go:340] cluster config:
	{Name:embed-certs-368536 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-368536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.95 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:07:08.987605  156414 iso.go:125] acquiring lock: {Name:mkff602c753bfa3e0d79d57ee8dc490f2e8d0298 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 19:07:08.989202  156414 out.go:177] * Starting "embed-certs-368536" primary control-plane node in "embed-certs-368536" cluster
	I0729 19:07:08.990452  156414 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 19:07:08.990496  156414 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 19:07:08.990510  156414 cache.go:56] Caching tarball of preloaded images
	I0729 19:07:08.990604  156414 preload.go:172] Found /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 19:07:08.990618  156414 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 19:07:08.990746  156414 profile.go:143] Saving config to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/config.json ...
	I0729 19:07:08.990949  156414 start.go:360] acquireMachinesLock for embed-certs-368536: {Name:mkb4fc91615189a18c2505c715f6575ee0da2912 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 19:07:08.990994  156414 start.go:364] duration metric: took 26.792µs to acquireMachinesLock for "embed-certs-368536"
	I0729 19:07:08.991013  156414 start.go:96] Skipping create...Using existing machine configuration
	I0729 19:07:08.991023  156414 fix.go:54] fixHost starting: 
	I0729 19:07:08.991314  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:07:08.991356  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:07:09.006100  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45265
	I0729 19:07:09.006507  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:07:09.007034  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:07:09.007060  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:07:09.007401  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:07:09.007594  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:07:09.007758  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetState
	I0729 19:07:09.009424  156414 fix.go:112] recreateIfNeeded on embed-certs-368536: state=Running err=<nil>
	W0729 19:07:09.009448  156414 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 19:07:09.011240  156414 out.go:177] * Updating the running kvm2 "embed-certs-368536" VM ...
	I0729 19:07:09.012305  156414 machine.go:94] provisionDockerMachine start ...
	I0729 19:07:09.012324  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:07:09.012506  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:07:09.014924  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:07:09.015334  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:07:09.015362  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:07:09.015507  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:07:09.015664  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:07:09.015796  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:07:09.015962  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:07:09.016106  156414 main.go:141] libmachine: Using SSH client type: native
	I0729 19:07:09.016288  156414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.95 22 <nil> <nil>}
	I0729 19:07:09.016300  156414 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 19:07:11.913157  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:14.985074  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:21.065092  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:24.137179  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:30.217186  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:33.289154  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:41.417004  152077 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:07:41.417303  152077 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:07:41.417326  152077 kubeadm.go:310] 
	I0729 19:07:41.417370  152077 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 19:07:41.417434  152077 kubeadm.go:310] 		timed out waiting for the condition
	I0729 19:07:41.417456  152077 kubeadm.go:310] 
	I0729 19:07:41.417514  152077 kubeadm.go:310] 	This error is likely caused by:
	I0729 19:07:41.417586  152077 kubeadm.go:310] 		- The kubelet is not running
	I0729 19:07:41.417720  152077 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 19:07:41.417732  152077 kubeadm.go:310] 
	I0729 19:07:41.417870  152077 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 19:07:41.417917  152077 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 19:07:41.417972  152077 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 19:07:41.417982  152077 kubeadm.go:310] 
	I0729 19:07:41.418104  152077 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 19:07:41.418232  152077 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 19:07:41.418247  152077 kubeadm.go:310] 
	I0729 19:07:41.418370  152077 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 19:07:41.418477  152077 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 19:07:41.418596  152077 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 19:07:41.418696  152077 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 19:07:41.418706  152077 kubeadm.go:310] 
	I0729 19:07:41.419442  152077 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 19:07:41.419562  152077 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 19:07:41.419660  152077 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 19:07:41.419767  152077 kubeadm.go:394] duration metric: took 8m3.273724985s to StartCluster
	I0729 19:07:41.419853  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:07:41.419923  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:07:41.466895  152077 cri.go:89] found id: ""
	I0729 19:07:41.466922  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.466933  152077 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:07:41.466941  152077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:07:41.467013  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:07:41.500836  152077 cri.go:89] found id: ""
	I0729 19:07:41.500876  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.500888  152077 logs.go:278] No container was found matching "etcd"
	I0729 19:07:41.500896  152077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:07:41.500949  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:07:41.534917  152077 cri.go:89] found id: ""
	I0729 19:07:41.534946  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.534958  152077 logs.go:278] No container was found matching "coredns"
	I0729 19:07:41.534968  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:07:41.535038  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:07:41.570516  152077 cri.go:89] found id: ""
	I0729 19:07:41.570545  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.570556  152077 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:07:41.570565  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:07:41.570640  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:07:41.608833  152077 cri.go:89] found id: ""
	I0729 19:07:41.608881  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.608894  152077 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:07:41.608902  152077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:07:41.608969  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:07:41.644079  152077 cri.go:89] found id: ""
	I0729 19:07:41.644114  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.644127  152077 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:07:41.644136  152077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:07:41.644198  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:07:41.678991  152077 cri.go:89] found id: ""
	I0729 19:07:41.679019  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.679028  152077 logs.go:278] No container was found matching "kindnet"
	I0729 19:07:41.679035  152077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:07:41.679088  152077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:07:41.722775  152077 cri.go:89] found id: ""
	I0729 19:07:41.722803  152077 logs.go:276] 0 containers: []
	W0729 19:07:41.722815  152077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:07:41.722829  152077 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:07:41.722857  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:07:41.841614  152077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:07:41.841641  152077 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:07:41.841658  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:07:41.945679  152077 logs.go:123] Gathering logs for container status ...
	I0729 19:07:41.945716  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:07:41.984505  152077 logs.go:123] Gathering logs for kubelet ...
	I0729 19:07:41.984536  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:07:42.037376  152077 logs.go:123] Gathering logs for dmesg ...
	I0729 19:07:42.037418  152077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0729 19:07:42.051259  152077 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0729 19:07:42.051310  152077 out.go:239] * 
	W0729 19:07:42.051369  152077 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 19:07:42.051390  152077 out.go:239] * 
	W0729 19:07:42.052280  152077 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 19:07:42.055255  152077 out.go:177] 
	W0729 19:07:42.056299  152077 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 19:07:42.056362  152077 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0729 19:07:42.056391  152077 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0729 19:07:42.057745  152077 out.go:177] 
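	# Illustrative only, not recorded output: the suggestion above could be applied by retrying the failed
	# v1.20.0 start (which per the audit table appears to be the old-k8s-version-834964 profile) with the
	# extra kubelet config, e.g.:
	#   out/minikube-linux-amd64 start -p old-k8s-version-834964 --driver=kvm2 --container-runtime=crio \
	#     --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd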
	I0729 19:07:42.413268  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:45.481071  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:51.561079  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:07:54.633091  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:00.713100  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:03.785182  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:09.865116  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:12.937102  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:19.017094  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:22.093191  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:28.169116  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:31.241130  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:37.321134  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:40.393092  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:46.473118  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:49.545166  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:55.625086  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:08:58.697184  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:04.777113  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:07.849165  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:13.933065  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:17.001100  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:23.081133  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:26.153086  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:32.233178  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:35.305183  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:41.385184  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:44.461106  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:50.537120  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:53.609150  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:09:59.689091  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:02.761193  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:08.845074  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:11.917067  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:17.993090  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:21.065137  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:27.145098  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:30.217175  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:36.301060  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:39.369118  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:45.449082  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:48.521097  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:54.601120  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:10:57.673200  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:03.753116  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:06.825136  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:12.905195  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:15.977118  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:22.057076  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:25.129144  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:31.209150  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:34.281164  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:40.365067  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:43.437113  156414 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.95:22: connect: no route to host
	I0729 19:11:46.437452  156414 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 19:11:46.437532  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetMachineName
	I0729 19:11:46.437865  156414 buildroot.go:166] provisioning hostname "embed-certs-368536"
	I0729 19:11:46.437902  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetMachineName
	I0729 19:11:46.438117  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:11:46.439690  156414 machine.go:97] duration metric: took 4m37.427371174s to provisionDockerMachine
	I0729 19:11:46.439733  156414 fix.go:56] duration metric: took 4m37.448711854s for fixHost
	I0729 19:11:46.439746  156414 start.go:83] releasing machines lock for "embed-certs-368536", held for 4m37.448735558s
	W0729 19:11:46.439780  156414 start.go:714] error starting host: provision: host is not running
	W0729 19:11:46.439926  156414 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0729 19:11:46.439941  156414 start.go:729] Will try again in 5 seconds ...
	I0729 19:11:51.440155  156414 start.go:360] acquireMachinesLock for embed-certs-368536: {Name:mkb4fc91615189a18c2505c715f6575ee0da2912 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 19:11:51.440266  156414 start.go:364] duration metric: took 69.381µs to acquireMachinesLock for "embed-certs-368536"
	I0729 19:11:51.440311  156414 start.go:96] Skipping create...Using existing machine configuration
	I0729 19:11:51.440322  156414 fix.go:54] fixHost starting: 
	I0729 19:11:51.440735  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:11:51.440764  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:11:51.455959  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35793
	I0729 19:11:51.456443  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:11:51.457053  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:11:51.457078  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:11:51.457475  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:11:51.457721  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:11:51.457885  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetState
	I0729 19:11:51.459531  156414 fix.go:112] recreateIfNeeded on embed-certs-368536: state=Stopped err=<nil>
	I0729 19:11:51.459558  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	W0729 19:11:51.459736  156414 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 19:11:51.461603  156414 out.go:177] * Restarting existing kvm2 VM for "embed-certs-368536" ...
	I0729 19:11:51.462768  156414 main.go:141] libmachine: (embed-certs-368536) Calling .Start
	I0729 19:11:51.462973  156414 main.go:141] libmachine: (embed-certs-368536) Ensuring networks are active...
	I0729 19:11:51.463679  156414 main.go:141] libmachine: (embed-certs-368536) Ensuring network default is active
	I0729 19:11:51.464155  156414 main.go:141] libmachine: (embed-certs-368536) Ensuring network mk-embed-certs-368536 is active
	I0729 19:11:51.464571  156414 main.go:141] libmachine: (embed-certs-368536) Getting domain xml...
	I0729 19:11:51.465359  156414 main.go:141] libmachine: (embed-certs-368536) Creating domain...
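The driver steps above ("Ensuring networks are active", "Creating domain") are plain libvirt operations. A hand-run equivalent, useful when a restart like this hangs, might look like the following sketch (illustrative only; the connection URI matches the qemu:///system value used elsewhere in this run):

	# List the domain and its networks as the kvm2 driver sees them.
	virsh -c qemu:///system list --all | grep embed-certs-368536
	virsh -c qemu:///system net-list --all
	# Once the domain is up, ask libvirt for the address the DHCP wait loop below is polling for.
	virsh -c qemu:///system domifaddr embed-certs-368536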
	I0729 19:11:51.794918  156414 main.go:141] libmachine: (embed-certs-368536) Waiting to get IP...
	I0729 19:11:51.795679  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:51.796159  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:51.796221  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:51.796150  157493 retry.go:31] will retry after 235.729444ms: waiting for machine to come up
	I0729 19:11:52.033554  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:52.034180  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:52.034207  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:52.034103  157493 retry.go:31] will retry after 323.595446ms: waiting for machine to come up
	I0729 19:11:52.359640  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:52.360204  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:52.360233  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:52.360143  157493 retry.go:31] will retry after 341.954873ms: waiting for machine to come up
	I0729 19:11:52.703779  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:52.704350  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:52.704373  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:52.704298  157493 retry.go:31] will retry after 385.738264ms: waiting for machine to come up
	I0729 19:11:53.091976  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:53.092451  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:53.092484  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:53.092397  157493 retry.go:31] will retry after 759.811264ms: waiting for machine to come up
	I0729 19:11:53.853241  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:53.853799  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:53.853826  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:53.853755  157493 retry.go:31] will retry after 761.294244ms: waiting for machine to come up
	I0729 19:11:54.616298  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:54.617009  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:54.617039  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:54.616952  157493 retry.go:31] will retry after 1.056936741s: waiting for machine to come up
	I0729 19:11:55.675047  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:55.675491  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:55.675514  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:55.675442  157493 retry.go:31] will retry after 1.232760679s: waiting for machine to come up
	I0729 19:11:56.909745  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:56.910283  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:56.910309  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:56.910228  157493 retry.go:31] will retry after 1.432617964s: waiting for machine to come up
	I0729 19:11:58.344399  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:11:58.345006  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:11:58.345033  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:11:58.344957  157493 retry.go:31] will retry after 1.914060621s: waiting for machine to come up
	I0729 19:12:00.262146  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:00.262707  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:12:00.262739  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:12:00.262638  157493 retry.go:31] will retry after 2.77447957s: waiting for machine to come up
	I0729 19:12:03.039059  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:03.039693  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:12:03.039718  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:12:03.039650  157493 retry.go:31] will retry after 2.755354142s: waiting for machine to come up
	I0729 19:12:05.797890  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:05.798279  156414 main.go:141] libmachine: (embed-certs-368536) DBG | unable to find current IP address of domain embed-certs-368536 in network mk-embed-certs-368536
	I0729 19:12:05.798306  156414 main.go:141] libmachine: (embed-certs-368536) DBG | I0729 19:12:05.798234  157493 retry.go:31] will retry after 4.501451096s: waiting for machine to come up
	I0729 19:12:10.304116  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.304586  156414 main.go:141] libmachine: (embed-certs-368536) Found IP for machine: 192.168.50.95
	I0729 19:12:10.304616  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has current primary IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.304626  156414 main.go:141] libmachine: (embed-certs-368536) Reserving static IP address...
	I0729 19:12:10.305096  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "embed-certs-368536", mac: "52:54:00:86:e7:e8", ip: "192.168.50.95"} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.305134  156414 main.go:141] libmachine: (embed-certs-368536) DBG | skip adding static IP to network mk-embed-certs-368536 - found existing host DHCP lease matching {name: "embed-certs-368536", mac: "52:54:00:86:e7:e8", ip: "192.168.50.95"}
	I0729 19:12:10.305151  156414 main.go:141] libmachine: (embed-certs-368536) Reserved static IP address: 192.168.50.95
	I0729 19:12:10.305166  156414 main.go:141] libmachine: (embed-certs-368536) Waiting for SSH to be available...
	I0729 19:12:10.305184  156414 main.go:141] libmachine: (embed-certs-368536) DBG | Getting to WaitForSSH function...
	I0729 19:12:10.307568  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.307936  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.307972  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.308079  156414 main.go:141] libmachine: (embed-certs-368536) DBG | Using SSH client type: external
	I0729 19:12:10.308110  156414 main.go:141] libmachine: (embed-certs-368536) DBG | Using SSH private key: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/embed-certs-368536/id_rsa (-rw-------)
	I0729 19:12:10.308140  156414 main.go:141] libmachine: (embed-certs-368536) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.95 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19339-88081/.minikube/machines/embed-certs-368536/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 19:12:10.308157  156414 main.go:141] libmachine: (embed-certs-368536) DBG | About to run SSH command:
	I0729 19:12:10.308170  156414 main.go:141] libmachine: (embed-certs-368536) DBG | exit 0
	I0729 19:12:10.436958  156414 main.go:141] libmachine: (embed-certs-368536) DBG | SSH cmd err, output: <nil>: 
	I0729 19:12:10.437313  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetConfigRaw
	I0729 19:12:10.437962  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetIP
	I0729 19:12:10.440164  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.440520  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.440545  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.440930  156414 profile.go:143] Saving config to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/config.json ...
	I0729 19:12:10.441184  156414 machine.go:94] provisionDockerMachine start ...
	I0729 19:12:10.441207  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:12:10.441430  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:10.443802  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.444155  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.444187  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.444375  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:10.444541  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:10.444719  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:10.444897  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:10.445064  156414 main.go:141] libmachine: Using SSH client type: native
	I0729 19:12:10.445289  156414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.95 22 <nil> <nil>}
	I0729 19:12:10.445300  156414 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 19:12:10.557371  156414 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 19:12:10.557405  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetMachineName
	I0729 19:12:10.557675  156414 buildroot.go:166] provisioning hostname "embed-certs-368536"
	I0729 19:12:10.557706  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetMachineName
	I0729 19:12:10.557905  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:10.560444  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.560793  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.560819  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.561027  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:10.561249  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:10.561442  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:10.561594  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:10.561783  156414 main.go:141] libmachine: Using SSH client type: native
	I0729 19:12:10.561990  156414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.95 22 <nil> <nil>}
	I0729 19:12:10.562006  156414 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-368536 && echo "embed-certs-368536" | sudo tee /etc/hostname
	I0729 19:12:10.688844  156414 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-368536
	
	I0729 19:12:10.688891  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:10.691701  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.692075  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.692102  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.692293  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:10.692468  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:10.692660  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:10.692821  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:10.693024  156414 main.go:141] libmachine: Using SSH client type: native
	I0729 19:12:10.693222  156414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.95 22 <nil> <nil>}
	I0729 19:12:10.693245  156414 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-368536' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-368536/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-368536' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 19:12:10.814346  156414 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 19:12:10.814376  156414 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19339-88081/.minikube CaCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19339-88081/.minikube}
	I0729 19:12:10.814400  156414 buildroot.go:174] setting up certificates
	I0729 19:12:10.814409  156414 provision.go:84] configureAuth start
	I0729 19:12:10.814417  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetMachineName
	I0729 19:12:10.814706  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetIP
	I0729 19:12:10.817503  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.817859  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.817886  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.818025  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:10.820077  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.820443  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.820473  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.820617  156414 provision.go:143] copyHostCerts
	I0729 19:12:10.820703  156414 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem, removing ...
	I0729 19:12:10.820713  156414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem
	I0729 19:12:10.820781  156414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/ca.pem (1078 bytes)
	I0729 19:12:10.820905  156414 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem, removing ...
	I0729 19:12:10.820914  156414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem
	I0729 19:12:10.820943  156414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/cert.pem (1123 bytes)
	I0729 19:12:10.821005  156414 exec_runner.go:144] found /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem, removing ...
	I0729 19:12:10.821013  156414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem
	I0729 19:12:10.821041  156414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19339-88081/.minikube/key.pem (1679 bytes)
	I0729 19:12:10.821115  156414 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem org=jenkins.embed-certs-368536 san=[127.0.0.1 192.168.50.95 embed-certs-368536 localhost minikube]
	I0729 19:12:10.867595  156414 provision.go:177] copyRemoteCerts
	I0729 19:12:10.867661  156414 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 19:12:10.867685  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:10.870501  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.870857  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:10.870881  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:10.871063  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:10.871267  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:10.871428  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:10.871595  156414 sshutil.go:53] new ssh client: &{IP:192.168.50.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/embed-certs-368536/id_rsa Username:docker}
	I0729 19:12:10.956269  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 19:12:10.981554  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 19:12:11.006055  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 19:12:11.031709  156414 provision.go:87] duration metric: took 217.286178ms to configureAuth
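configureAuth above regenerates the machine server certificate and copies it into the guest. A quick way to confirm what landed on the node (a sketch, not taken from this run; `<profile>` is a placeholder) is:

	# The copyRemoteCerts step above scp'd these three files into /etc/docker on the guest.
	minikube ssh -p <profile> -- ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem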
	I0729 19:12:11.031746  156414 buildroot.go:189] setting minikube options for container-runtime
	I0729 19:12:11.031924  156414 config.go:182] Loaded profile config "embed-certs-368536": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:12:11.032088  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:11.034772  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.035129  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:11.035159  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.035405  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:11.035583  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:11.035737  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:11.035863  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:11.036016  156414 main.go:141] libmachine: Using SSH client type: native
	I0729 19:12:11.036203  156414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.95 22 <nil> <nil>}
	I0729 19:12:11.036218  156414 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 19:12:11.321782  156414 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 19:12:11.321810  156414 machine.go:97] duration metric: took 880.608535ms to provisionDockerMachine
	I0729 19:12:11.321823  156414 start.go:293] postStartSetup for "embed-certs-368536" (driver="kvm2")
	I0729 19:12:11.321834  156414 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 19:12:11.321850  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:12:11.322243  156414 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 19:12:11.322281  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:11.325081  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.325437  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:11.325459  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.325612  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:11.325841  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:11.326025  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:11.326177  156414 sshutil.go:53] new ssh client: &{IP:192.168.50.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/embed-certs-368536/id_rsa Username:docker}
	I0729 19:12:11.412548  156414 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 19:12:11.417360  156414 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 19:12:11.417386  156414 filesync.go:126] Scanning /home/jenkins/minikube-integration/19339-88081/.minikube/addons for local assets ...
	I0729 19:12:11.417466  156414 filesync.go:126] Scanning /home/jenkins/minikube-integration/19339-88081/.minikube/files for local assets ...
	I0729 19:12:11.417538  156414 filesync.go:149] local asset: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem -> 952822.pem in /etc/ssl/certs
	I0729 19:12:11.417632  156414 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 19:12:11.429176  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem --> /etc/ssl/certs/952822.pem (1708 bytes)
	I0729 19:12:11.457333  156414 start.go:296] duration metric: took 135.490005ms for postStartSetup
	I0729 19:12:11.457401  156414 fix.go:56] duration metric: took 20.017075153s for fixHost
	I0729 19:12:11.457428  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:11.460243  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.460653  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:11.460702  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.460873  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:11.461060  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:11.461233  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:11.461408  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:11.461586  156414 main.go:141] libmachine: Using SSH client type: native
	I0729 19:12:11.461763  156414 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.95 22 <nil> <nil>}
	I0729 19:12:11.461779  156414 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 19:12:11.573697  156414 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722280331.531987438
	
	I0729 19:12:11.573724  156414 fix.go:216] guest clock: 1722280331.531987438
	I0729 19:12:11.573735  156414 fix.go:229] Guest: 2024-07-29 19:12:11.531987438 +0000 UTC Remote: 2024-07-29 19:12:11.457406225 +0000 UTC m=+302.608153452 (delta=74.581213ms)
	I0729 19:12:11.573758  156414 fix.go:200] guest clock delta is within tolerance: 74.581213ms
	I0729 19:12:11.573763  156414 start.go:83] releasing machines lock for "embed-certs-368536", held for 20.133485011s
	I0729 19:12:11.573782  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:12:11.574056  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetIP
	I0729 19:12:11.576405  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.576760  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:11.576798  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.576988  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:12:11.577479  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:12:11.577644  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:12:11.577737  156414 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 19:12:11.577799  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:11.577840  156414 ssh_runner.go:195] Run: cat /version.json
	I0729 19:12:11.577869  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:12:11.580473  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.580564  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.580840  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:11.580890  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.580921  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:11.580936  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:11.581056  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:11.581203  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:12:11.581358  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:11.581369  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:12:11.581554  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:11.581564  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:12:11.581669  156414 sshutil.go:53] new ssh client: &{IP:192.168.50.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/embed-certs-368536/id_rsa Username:docker}
	I0729 19:12:11.581732  156414 sshutil.go:53] new ssh client: &{IP:192.168.50.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/embed-certs-368536/id_rsa Username:docker}
	I0729 19:12:11.662254  156414 ssh_runner.go:195] Run: systemctl --version
	I0729 19:12:11.688214  156414 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 19:12:11.838322  156414 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 19:12:11.844435  156414 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 19:12:11.844521  156414 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 19:12:11.859899  156414 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
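The find/mv step above renames any existing bridge or podman CNI configs with an .mk_disabled suffix so minikube's own bridge CNI takes precedence later. A hedged way to verify the result on the node (placeholder profile name, not captured here):

	# Disabled configs keep their content but gain the .mk_disabled suffix.
	minikube ssh -p <profile> -- ls -l /etc/cni/net.d/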
	I0729 19:12:11.859923  156414 start.go:495] detecting cgroup driver to use...
	I0729 19:12:11.859990  156414 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 19:12:11.876768  156414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 19:12:11.890508  156414 docker.go:217] disabling cri-docker service (if available) ...
	I0729 19:12:11.890584  156414 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 19:12:11.904817  156414 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 19:12:11.919251  156414 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 19:12:12.053205  156414 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 19:12:12.205975  156414 docker.go:233] disabling docker service ...
	I0729 19:12:12.206041  156414 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 19:12:12.222129  156414 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 19:12:12.235288  156414 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 19:12:12.386940  156414 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 19:12:12.503688  156414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 19:12:12.518539  156414 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 19:12:12.538744  156414 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 19:12:12.538805  156414 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:12:12.549615  156414 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 19:12:12.549683  156414 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:12:12.560565  156414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:12:12.571814  156414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:12:12.582852  156414 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 19:12:12.595237  156414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:12:12.607232  156414 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:12:12.626183  156414 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
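The sed chain above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image, switches cgroup_manager to cgroupfs, forces conmon_cgroup to "pod", and adds net.ipv4.ip_unprivileged_port_start=0 to default_sysctls. A minimal check of the end result (illustrative, not part of this log) would be:

	# Show only the keys the sed commands above touched.
	minikube ssh -p <profile> -- "sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf"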
	I0729 19:12:12.637202  156414 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 19:12:12.647141  156414 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 19:12:12.647204  156414 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 19:12:12.661099  156414 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 19:12:12.671936  156414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:12:12.795418  156414 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 19:12:12.934125  156414 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 19:12:12.934220  156414 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 19:12:12.939058  156414 start.go:563] Will wait 60s for crictl version
	I0729 19:12:12.939123  156414 ssh_runner.go:195] Run: which crictl
	I0729 19:12:12.943972  156414 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 19:12:12.990564  156414 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 19:12:12.990672  156414 ssh_runner.go:195] Run: crio --version
	I0729 19:12:13.018852  156414 ssh_runner.go:195] Run: crio --version
	I0729 19:12:13.053593  156414 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 19:12:13.054890  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetIP
	I0729 19:12:13.057601  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:13.057994  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:12:13.058025  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:12:13.058229  156414 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0729 19:12:13.062303  156414 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 19:12:13.075815  156414 kubeadm.go:883] updating cluster {Name:embed-certs-368536 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-368536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.95 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 19:12:13.075989  156414 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 19:12:13.076042  156414 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:12:13.112073  156414 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 19:12:13.112136  156414 ssh_runner.go:195] Run: which lz4
	I0729 19:12:13.116209  156414 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 19:12:13.120509  156414 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 19:12:13.120545  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 19:12:14.521429  156414 crio.go:462] duration metric: took 1.405252765s to copy over tarball
	I0729 19:12:14.521516  156414 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 19:12:16.740086  156414 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.218529852s)
	I0729 19:12:16.740129  156414 crio.go:469] duration metric: took 2.218665992s to extract the tarball
	I0729 19:12:16.740140  156414 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 19:12:16.779302  156414 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:12:16.825787  156414 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 19:12:16.825823  156414 cache_images.go:84] Images are preloaded, skipping loading
	I0729 19:12:16.825832  156414 kubeadm.go:934] updating node { 192.168.50.95 8443 v1.30.3 crio true true} ...
	I0729 19:12:16.825972  156414 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-368536 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.95
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-368536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 19:12:16.826034  156414 ssh_runner.go:195] Run: crio config
	I0729 19:12:16.873701  156414 cni.go:84] Creating CNI manager for ""
	I0729 19:12:16.873734  156414 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:12:16.873752  156414 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 19:12:16.873776  156414 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.95 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-368536 NodeName:embed-certs-368536 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.95"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.95 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 19:12:16.873923  156414 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.95
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-368536"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.95
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.95"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 19:12:16.873987  156414 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 19:12:16.884427  156414 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 19:12:16.884530  156414 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 19:12:16.895097  156414 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0729 19:12:16.914112  156414 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 19:12:16.931797  156414 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
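The kubeadm.yaml.new written above (2159 bytes) is the multi-document manifest printed earlier in the log (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick way to sanity-check such a file is to decode each document and print its kind; the sketch below uses gopkg.in/yaml.v3, which is my choice here and not necessarily what minikube itself uses.

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Path taken from the log above; kubeadm.yaml.new is later copied to kubeadm.yaml.
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if err == io.EOF {
				break // end of the multi-document stream
			}
			panic(err)
		}
		fmt.Printf("apiVersion=%v kind=%v\n", doc["apiVersion"], doc["kind"])
	}
}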
	I0729 19:12:16.949824  156414 ssh_runner.go:195] Run: grep 192.168.50.95	control-plane.minikube.internal$ /etc/hosts
	I0729 19:12:16.953765  156414 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.95	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 19:12:16.967138  156414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:12:17.088789  156414 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 19:12:17.108577  156414 certs.go:68] Setting up /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536 for IP: 192.168.50.95
	I0729 19:12:17.108601  156414 certs.go:194] generating shared ca certs ...
	I0729 19:12:17.108623  156414 certs.go:226] acquiring lock for ca certs: {Name:mk20106d58b3f22ea0fc0f4b499fa3fa572a2690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:12:17.108831  156414 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key
	I0729 19:12:17.109076  156414 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key
	I0729 19:12:17.109118  156414 certs.go:256] generating profile certs ...
	I0729 19:12:17.109296  156414 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/client.key
	I0729 19:12:17.109394  156414 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/apiserver.key.77ca8755
	I0729 19:12:17.109448  156414 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/proxy-client.key
	I0729 19:12:17.109619  156414 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282.pem (1338 bytes)
	W0729 19:12:17.109651  156414 certs.go:480] ignoring /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282_empty.pem, impossibly tiny 0 bytes
	I0729 19:12:17.109661  156414 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 19:12:17.109688  156414 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/ca.pem (1078 bytes)
	I0729 19:12:17.109722  156414 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/cert.pem (1123 bytes)
	I0729 19:12:17.109756  156414 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/certs/key.pem (1679 bytes)
	I0729 19:12:17.109819  156414 certs.go:484] found cert: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem (1708 bytes)
	I0729 19:12:17.110926  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 19:12:17.142003  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 19:12:17.176798  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 19:12:17.204272  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 19:12:17.244558  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0729 19:12:17.281935  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 19:12:17.306456  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 19:12:17.332576  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/embed-certs-368536/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 19:12:17.357372  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/certs/95282.pem --> /usr/share/ca-certificates/95282.pem (1338 bytes)
	I0729 19:12:17.380363  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/ssl/certs/952822.pem --> /usr/share/ca-certificates/952822.pem (1708 bytes)
	I0729 19:12:17.403737  156414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19339-88081/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 19:12:17.428493  156414 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 19:12:17.445767  156414 ssh_runner.go:195] Run: openssl version
	I0729 19:12:17.451514  156414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/952822.pem && ln -fs /usr/share/ca-certificates/952822.pem /etc/ssl/certs/952822.pem"
	I0729 19:12:17.463663  156414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/952822.pem
	I0729 19:12:17.469467  156414 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 17:45 /usr/share/ca-certificates/952822.pem
	I0729 19:12:17.469523  156414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/952822.pem
	I0729 19:12:17.475754  156414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/952822.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 19:12:17.487617  156414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 19:12:17.498699  156414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:12:17.503205  156414 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:12:17.503257  156414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:12:17.508880  156414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 19:12:17.519570  156414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/95282.pem && ln -fs /usr/share/ca-certificates/95282.pem /etc/ssl/certs/95282.pem"
	I0729 19:12:17.530630  156414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/95282.pem
	I0729 19:12:17.535004  156414 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 17:45 /usr/share/ca-certificates/95282.pem
	I0729 19:12:17.535046  156414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/95282.pem
	I0729 19:12:17.540647  156414 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/95282.pem /etc/ssl/certs/51391683.0"
	I0729 19:12:17.551719  156414 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 19:12:17.556199  156414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 19:12:17.562469  156414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 19:12:17.568561  156414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 19:12:17.574653  156414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 19:12:17.580458  156414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 19:12:17.586392  156414 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
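The six openssl invocations above use "-checkend 86400" to ask whether each control-plane certificate expires within the next 24 hours. The same check can be expressed directly against the PEM file; the sketch below is a standalone illustration, not minikube's certs.go code, and the sample path is one of the certs from the log.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file expires
// within d, mirroring what `openssl x509 -checkend <seconds>` tests.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block found in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(d)), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err) // true means the cert would need regeneration
}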
	I0729 19:12:17.592304  156414 kubeadm.go:392] StartCluster: {Name:embed-certs-368536 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:embed-certs-368536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.95 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:12:17.592422  156414 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 19:12:17.592467  156414 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:12:17.631628  156414 cri.go:89] found id: ""
	I0729 19:12:17.631701  156414 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 19:12:17.642636  156414 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 19:12:17.642665  156414 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 19:12:17.642731  156414 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 19:12:17.652551  156414 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 19:12:17.653603  156414 kubeconfig.go:125] found "embed-certs-368536" server: "https://192.168.50.95:8443"
	I0729 19:12:17.656022  156414 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 19:12:17.666987  156414 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.95
	I0729 19:12:17.667014  156414 kubeadm.go:1160] stopping kube-system containers ...
	I0729 19:12:17.667026  156414 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 19:12:17.667065  156414 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:12:17.709534  156414 cri.go:89] found id: ""
	I0729 19:12:17.709598  156414 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 19:12:17.729709  156414 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:12:17.739968  156414 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:12:17.739990  156414 kubeadm.go:157] found existing configuration files:
	
	I0729 19:12:17.740051  156414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 19:12:17.749727  156414 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:12:17.749794  156414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:12:17.760013  156414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 19:12:17.781070  156414 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:12:17.781135  156414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:12:17.794747  156414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 19:12:17.805862  156414 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:12:17.805938  156414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:12:17.815892  156414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 19:12:17.825005  156414 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:12:17.825072  156414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 19:12:17.834745  156414 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:12:17.844191  156414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:12:17.963586  156414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:12:19.325254  156414 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.361626819s)
	I0729 19:12:19.325289  156414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:12:19.537565  156414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:12:19.611697  156414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
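The lines above show the restart path re-running individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated config rather than doing a full kubeadm init. A rough standalone sketch of that sequence is below; it simply shells out with the same binary path and config file seen in the log and is not how minikube's kubeadm.go actually drives these commands.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same phase sequence and paths as the invocations in the log above.
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, phase := range phases {
		args := []string{"/var/lib/minikube/binaries/v1.30.3/kubeadm", "init", "phase"}
		args = append(args, strings.Fields(phase)...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		out, err := exec.Command("sudo", args...).CombinedOutput()
		fmt.Printf("kubeadm init phase %s: err=%v\n%s", phase, err, out)
	}
}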
	I0729 19:12:19.710291  156414 api_server.go:52] waiting for apiserver process to appear ...
	I0729 19:12:19.710408  156414 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:12:20.210809  156414 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:12:20.711234  156414 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:12:20.727361  156414 api_server.go:72] duration metric: took 1.017067714s to wait for apiserver process to appear ...
	I0729 19:12:20.727396  156414 api_server.go:88] waiting for apiserver healthz status ...
	I0729 19:12:20.727432  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:20.727961  156414 api_server.go:269] stopped: https://192.168.50.95:8443/healthz: Get "https://192.168.50.95:8443/healthz": dial tcp 192.168.50.95:8443: connect: connection refused
	I0729 19:12:21.228408  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:23.622048  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 19:12:23.622093  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 19:12:23.622106  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:23.628174  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 19:12:23.628195  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 19:12:23.728403  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:23.732920  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:12:23.732969  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:12:24.227954  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:24.233109  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:12:24.233143  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:12:24.727641  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:24.735127  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:12:24.735156  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:12:25.227686  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:25.236914  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:12:25.236947  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:12:25.728500  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:25.735276  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:12:25.735307  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:12:26.227824  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:26.232439  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:12:26.232471  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:12:26.728521  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:26.732952  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:12:26.732989  156414 api_server.go:103] status: https://192.168.50.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:12:27.227504  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:12:27.232166  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 200:
	ok
	I0729 19:12:27.238505  156414 api_server.go:141] control plane version: v1.30.3
	I0729 19:12:27.238531  156414 api_server.go:131] duration metric: took 6.511128129s to wait for apiserver health ...
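The block above is the apiserver health wait: /healthz is polled roughly every 500ms, first failing with connection refused, then 403 (anonymous user), then 500 while post-start hooks finish, and finally returning 200 after about 6.5s. A minimal sketch of such a polling loop follows; it is not minikube's api_server.go implementation, and it skips TLS verification purely to keep the example short, whereas the real check authenticates against the cluster CA.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200 OK
// or the deadline passes, printing the body of failed probes as the log does.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.50.95:8443/healthz", 2*time.Minute))
}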
	I0729 19:12:27.238541  156414 cni.go:84] Creating CNI manager for ""
	I0729 19:12:27.238560  156414 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:12:27.240943  156414 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 19:12:27.242526  156414 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 19:12:27.254578  156414 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 19:12:27.274700  156414 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 19:12:27.289359  156414 system_pods.go:59] 8 kube-system pods found
	I0729 19:12:27.289412  156414 system_pods.go:61] "coredns-7db6d8ff4d-dww2j" [31418aeb-98be-4b43-a687-5c8f64ec5e9d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:12:27.289424  156414 system_pods.go:61] "etcd-embed-certs-368536" [0a7aac00-9161-473a-87f1-2702211beaac] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 19:12:27.289443  156414 system_pods.go:61] "kube-apiserver-embed-certs-368536" [d6638a87-43df-457e-b335-1ab2aa03f421] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 19:12:27.289469  156414 system_pods.go:61] "kube-controller-manager-embed-certs-368536" [76fefc23-7794-40fc-845a-73d83eb55450] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 19:12:27.289478  156414 system_pods.go:61] "kube-proxy-9xwt2" [f637605b-9bc2-4922-b801-04f681a81e7c] Running
	I0729 19:12:27.289485  156414 system_pods.go:61] "kube-scheduler-embed-certs-368536" [476e1d42-6610-44b5-b77b-b25537f044eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 19:12:27.289495  156414 system_pods.go:61] "metrics-server-569cc877fc-xnkwq" [19328b03-8c7d-499f-9a30-31f023605e49] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:12:27.289500  156414 system_pods.go:61] "storage-provisioner" [b049d065-fbb1-48b6-a377-2938a0519c78] Running
	I0729 19:12:27.289513  156414 system_pods.go:74] duration metric: took 14.789212ms to wait for pod list to return data ...
	I0729 19:12:27.289525  156414 node_conditions.go:102] verifying NodePressure condition ...
	I0729 19:12:27.292692  156414 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 19:12:27.292716  156414 node_conditions.go:123] node cpu capacity is 2
	I0729 19:12:27.292809  156414 node_conditions.go:105] duration metric: took 3.278515ms to run NodePressure ...
	I0729 19:12:27.292825  156414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:12:27.563167  156414 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 19:12:27.567532  156414 kubeadm.go:739] kubelet initialised
	I0729 19:12:27.567552  156414 kubeadm.go:740] duration metric: took 4.363687ms waiting for restarted kubelet to initialise ...
	I0729 19:12:27.567566  156414 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:12:27.573549  156414 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-dww2j" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:29.580511  156414 pod_ready.go:102] pod "coredns-7db6d8ff4d-dww2j" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:32.080352  156414 pod_ready.go:102] pod "coredns-7db6d8ff4d-dww2j" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:34.079542  156414 pod_ready.go:92] pod "coredns-7db6d8ff4d-dww2j" in "kube-system" namespace has status "Ready":"True"
	I0729 19:12:34.079564  156414 pod_ready.go:81] duration metric: took 6.505994438s for pod "coredns-7db6d8ff4d-dww2j" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:34.079574  156414 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:36.086785  156414 pod_ready.go:102] pod "etcd-embed-certs-368536" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:37.089587  156414 pod_ready.go:92] pod "etcd-embed-certs-368536" in "kube-system" namespace has status "Ready":"True"
	I0729 19:12:37.089611  156414 pod_ready.go:81] duration metric: took 3.010030448s for pod "etcd-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.089621  156414 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.093814  156414 pod_ready.go:92] pod "kube-apiserver-embed-certs-368536" in "kube-system" namespace has status "Ready":"True"
	I0729 19:12:37.093833  156414 pod_ready.go:81] duration metric: took 4.206508ms for pod "kube-apiserver-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.093842  156414 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.097603  156414 pod_ready.go:92] pod "kube-controller-manager-embed-certs-368536" in "kube-system" namespace has status "Ready":"True"
	I0729 19:12:37.097621  156414 pod_ready.go:81] duration metric: took 3.772149ms for pod "kube-controller-manager-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.097633  156414 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9xwt2" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.101696  156414 pod_ready.go:92] pod "kube-proxy-9xwt2" in "kube-system" namespace has status "Ready":"True"
	I0729 19:12:37.101720  156414 pod_ready.go:81] duration metric: took 4.078653ms for pod "kube-proxy-9xwt2" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.101732  156414 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.107711  156414 pod_ready.go:92] pod "kube-scheduler-embed-certs-368536" in "kube-system" namespace has status "Ready":"True"
	I0729 19:12:37.107728  156414 pod_ready.go:81] duration metric: took 5.989066ms for pod "kube-scheduler-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:37.107738  156414 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace to be "Ready" ...
	I0729 19:12:39.113796  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:41.115401  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:43.614461  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:45.614900  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:48.113738  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:50.114364  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:52.114790  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:54.613472  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:56.613889  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:12:59.114206  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:01.614516  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:04.114859  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:06.114993  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:08.615213  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:11.114015  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:13.114342  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:15.614423  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:18.114442  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:20.614465  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:23.117909  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:25.614155  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:28.114746  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:30.613689  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:32.614790  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:35.113716  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:37.116362  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:39.614545  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:42.114414  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:44.114654  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:46.114765  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:48.615523  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:51.114096  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:53.114180  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:55.614278  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:13:58.114138  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:00.613679  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:02.614878  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:05.117278  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:07.614025  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:09.614681  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:12.114414  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:14.114458  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:16.614364  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:19.114533  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:21.613756  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:24.114325  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:26.614276  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:29.114137  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:31.114274  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:33.115749  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:35.614067  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:37.614374  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:39.615618  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:42.114139  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:44.114503  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:46.114624  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:48.613926  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:50.614527  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:53.115129  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:55.613563  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:57.615164  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:14:59.616129  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:02.114384  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:04.114621  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:06.114864  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:08.115242  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:10.613949  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:13.115359  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:15.614560  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:17.615109  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:20.114341  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:22.115253  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:24.119792  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:26.614361  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:29.113806  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:31.114150  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:33.614207  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:35.616204  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:38.113264  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:40.615054  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:42.615127  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:45.115119  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:47.613589  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:49.613803  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:51.615235  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:54.113908  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:56.614614  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:15:59.114193  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:01.614642  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:04.114186  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:06.614156  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:08.614216  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:10.615368  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:13.116263  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:15.613987  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:17.614183  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:19.617124  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:22.114156  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:24.613643  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:26.613720  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:28.616174  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:31.114289  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:33.114818  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:35.614735  156414 pod_ready.go:102] pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace has status "Ready":"False"
	I0729 19:16:37.107998  156414 pod_ready.go:81] duration metric: took 4m0.000241864s for pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace to be "Ready" ...
	E0729 19:16:37.108045  156414 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-xnkwq" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 19:16:37.108068  156414 pod_ready.go:38] duration metric: took 4m9.540493845s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:16:37.108105  156414 kubeadm.go:597] duration metric: took 4m19.465427343s to restartPrimaryControlPlane
	W0729 19:16:37.108167  156414 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 19:16:37.108196  156414 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 19:17:08.548650  156414 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.44042578s)
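The long run of pod_ready.go:102 entries above is a readiness poll: the pod is re-read roughly every two seconds and its Ready condition checked until the 4m0s budget expires, after which minikube gives up on restarting the existing control plane and falls back to `kubeadm reset`. Below is a minimal client-go sketch of an equivalent poll; it is illustrative only and not minikube's pod_ready implementation, though the pod name, namespace, kubeconfig path, timeout, and interval are taken from the log.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path as it appears in the log; adjust for a different environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19339-88081/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// 4m0s budget and ~2s poll interval mirror the log above.
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()

	const ns, name = "kube-system", "metrics-server-569cc877fc-xnkwq"
	for {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for pod to be Ready")
			return
		case <-time.After(2 * time.Second):
		}
	}
}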
	I0729 19:17:08.548730  156414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 19:17:08.564620  156414 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:17:08.575061  156414 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:17:08.585537  156414 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:17:08.585566  156414 kubeadm.go:157] found existing configuration files:
	
	I0729 19:17:08.585610  156414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 19:17:08.594641  156414 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:17:08.594702  156414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:17:08.604434  156414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 19:17:08.613126  156414 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:17:08.613177  156414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:17:08.622123  156414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 19:17:08.630620  156414 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:17:08.630661  156414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:17:08.640140  156414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 19:17:08.648712  156414 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:17:08.648768  156414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 19:17:08.658010  156414 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 19:17:08.709849  156414 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 19:17:08.709998  156414 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 19:17:08.850515  156414 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 19:17:08.850632  156414 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 19:17:08.850769  156414 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 19:17:09.057782  156414 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 19:17:09.059421  156414 out.go:204]   - Generating certificates and keys ...
	I0729 19:17:09.059494  156414 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 19:17:09.059566  156414 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 19:17:09.059636  156414 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 19:17:09.062277  156414 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 19:17:09.062401  156414 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 19:17:09.062475  156414 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 19:17:09.062526  156414 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 19:17:09.062616  156414 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 19:17:09.062695  156414 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 19:17:09.062807  156414 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 19:17:09.062863  156414 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 19:17:09.062933  156414 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 19:17:09.426782  156414 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 19:17:09.599745  156414 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 19:17:09.741530  156414 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 19:17:09.907315  156414 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 19:17:10.118045  156414 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 19:17:10.118623  156414 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 19:17:10.121594  156414 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 19:17:10.124052  156414 out.go:204]   - Booting up control plane ...
	I0729 19:17:10.124173  156414 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 19:17:10.124267  156414 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 19:17:10.124374  156414 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 19:17:10.144903  156414 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 19:17:10.145010  156414 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 19:17:10.145047  156414 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 19:17:10.278905  156414 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 19:17:10.279025  156414 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 19:17:11.280964  156414 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002120381s
	I0729 19:17:11.281070  156414 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 19:17:15.782460  156414 kubeadm.go:310] [api-check] The API server is healthy after 4.501562605s
	I0729 19:17:15.804614  156414 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 19:17:15.822230  156414 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 19:17:15.849613  156414 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 19:17:15.849870  156414 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-368536 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 19:17:15.861910  156414 kubeadm.go:310] [bootstrap-token] Using token: zhramo.fqhnhxuylehyq043
	I0729 19:17:15.863215  156414 out.go:204]   - Configuring RBAC rules ...
	I0729 19:17:15.863352  156414 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 19:17:15.870893  156414 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 19:17:15.886779  156414 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 19:17:15.889933  156414 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 19:17:15.893111  156414 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 19:17:15.895970  156414 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 19:17:16.200928  156414 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 19:17:16.625621  156414 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 19:17:17.195772  156414 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 19:17:17.197712  156414 kubeadm.go:310] 
	I0729 19:17:17.197780  156414 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 19:17:17.197791  156414 kubeadm.go:310] 
	I0729 19:17:17.197874  156414 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 19:17:17.197885  156414 kubeadm.go:310] 
	I0729 19:17:17.197925  156414 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 19:17:17.198023  156414 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 19:17:17.198108  156414 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 19:17:17.198120  156414 kubeadm.go:310] 
	I0729 19:17:17.198190  156414 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 19:17:17.198200  156414 kubeadm.go:310] 
	I0729 19:17:17.198258  156414 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 19:17:17.198267  156414 kubeadm.go:310] 
	I0729 19:17:17.198347  156414 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 19:17:17.198451  156414 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 19:17:17.198529  156414 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 19:17:17.198539  156414 kubeadm.go:310] 
	I0729 19:17:17.198633  156414 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 19:17:17.198750  156414 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 19:17:17.198761  156414 kubeadm.go:310] 
	I0729 19:17:17.198895  156414 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token zhramo.fqhnhxuylehyq043 \
	I0729 19:17:17.199041  156414 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:13f0bc092dc09b7291dec830fc09c94db5ac1707dd26df6091df80009af3af7f \
	I0729 19:17:17.199074  156414 kubeadm.go:310] 	--control-plane 
	I0729 19:17:17.199081  156414 kubeadm.go:310] 
	I0729 19:17:17.199199  156414 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 19:17:17.199210  156414 kubeadm.go:310] 
	I0729 19:17:17.199327  156414 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token zhramo.fqhnhxuylehyq043 \
	I0729 19:17:17.199478  156414 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:13f0bc092dc09b7291dec830fc09c94db5ac1707dd26df6091df80009af3af7f 
	I0729 19:17:17.200591  156414 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 19:17:17.200629  156414 cni.go:84] Creating CNI manager for ""
	I0729 19:17:17.200642  156414 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:17:17.202541  156414 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 19:17:17.203847  156414 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 19:17:17.214711  156414 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 19:17:17.233233  156414 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 19:17:17.233330  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:17.233332  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-368536 minikube.k8s.io/updated_at=2024_07_29T19_17_17_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=de53afae5e8f4269438099f4dad14d93a8a17e35 minikube.k8s.io/name=embed-certs-368536 minikube.k8s.io/primary=true
	I0729 19:17:17.265931  156414 ops.go:34] apiserver oom_adj: -16
	I0729 19:17:17.410594  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:17.911585  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:18.410650  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:18.911432  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:19.411062  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:19.911629  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:20.411050  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:20.911004  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:21.411031  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:21.910787  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:22.411228  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:22.911181  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:23.410624  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:23.910844  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:24.411409  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:24.910745  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:25.410675  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:25.910901  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:26.411562  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:26.911505  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:27.411552  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:27.910916  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:28.410868  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:28.911466  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:29.410633  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:29.911613  156414 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:17:29.992725  156414 kubeadm.go:1113] duration metric: took 12.75946311s to wait for elevateKubeSystemPrivileges
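The burst of `kubectl get sa default` calls above is the retry loop behind elevateKubeSystemPrivileges: the cluster is asked for the `default` ServiceAccount about every 500ms until it exists. A hedged client-go equivalent follows; the function name and the two-minute budget are assumptions, and only the namespace, object name, and retry interval come from the log. It is written to sit in the same package as the previous sketch and take its clientset as an argument.

package main

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForDefaultServiceAccount polls until the "default" ServiceAccount exists
// in the given namespace, mirroring the repeated `kubectl get sa default`
// calls in the log (~500ms apart). The two-minute timeout is an assumed value.
func waitForDefaultServiceAccount(ctx context.Context, cs kubernetes.Interface, ns string) error {
	ctx, cancel := context.WithTimeout(ctx, 2*time.Minute)
	defer cancel()
	for {
		_, err := cs.CoreV1().ServiceAccounts(ns).Get(ctx, "default", metav1.GetOptions{})
		if err == nil {
			return nil // the service account exists; RBAC setup can proceed
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond):
		}
	}
}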
	I0729 19:17:29.992767  156414 kubeadm.go:394] duration metric: took 5m12.400472687s to StartCluster
	I0729 19:17:29.992793  156414 settings.go:142] acquiring lock: {Name:mk5599d3fc2f664ed5eea99f33b4436f64ab8c83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:17:29.992902  156414 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19339-88081/kubeconfig
	I0729 19:17:29.994489  156414 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/kubeconfig: {Name:mk6702c0db404329102489655bdd2ff03ad6e919 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:17:29.994792  156414 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.95 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 19:17:29.994828  156414 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 19:17:29.994917  156414 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-368536"
	I0729 19:17:29.994954  156414 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-368536"
	I0729 19:17:29.994957  156414 addons.go:69] Setting default-storageclass=true in profile "embed-certs-368536"
	W0729 19:17:29.994966  156414 addons.go:243] addon storage-provisioner should already be in state true
	I0729 19:17:29.994969  156414 addons.go:69] Setting metrics-server=true in profile "embed-certs-368536"
	I0729 19:17:29.995004  156414 host.go:66] Checking if "embed-certs-368536" exists ...
	I0729 19:17:29.995003  156414 config.go:182] Loaded profile config "embed-certs-368536": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:17:29.995028  156414 addons.go:234] Setting addon metrics-server=true in "embed-certs-368536"
	W0729 19:17:29.995041  156414 addons.go:243] addon metrics-server should already be in state true
	I0729 19:17:29.994986  156414 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-368536"
	I0729 19:17:29.995073  156414 host.go:66] Checking if "embed-certs-368536" exists ...
	I0729 19:17:29.995409  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:17:29.995457  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:17:29.995460  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:17:29.995487  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:17:29.995487  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:17:29.995636  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:17:29.997279  156414 out.go:177] * Verifying Kubernetes components...
	I0729 19:17:29.998614  156414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:17:30.011510  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41821
	I0729 19:17:30.011717  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38475
	I0729 19:17:30.011970  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:17:30.012063  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:17:30.012480  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:17:30.012505  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:17:30.012626  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:17:30.012651  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:17:30.012967  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:17:30.013105  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:17:30.013284  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetState
	I0729 19:17:30.013527  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:17:30.013574  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:17:30.014086  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45947
	I0729 19:17:30.014502  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:17:30.015001  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:17:30.015018  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:17:30.015505  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:17:30.016720  156414 addons.go:234] Setting addon default-storageclass=true in "embed-certs-368536"
	W0729 19:17:30.016740  156414 addons.go:243] addon default-storageclass should already be in state true
	I0729 19:17:30.016770  156414 host.go:66] Checking if "embed-certs-368536" exists ...
	I0729 19:17:30.017091  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:17:30.017123  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:17:30.017432  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:17:30.017477  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:17:30.034798  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40239
	I0729 19:17:30.035372  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:17:30.036179  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:17:30.036207  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:17:30.037055  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37901
	I0729 19:17:30.037161  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46563
	I0729 19:17:30.036581  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:17:30.037493  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetState
	I0729 19:17:30.037581  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:17:30.037636  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:17:30.038047  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:17:30.038056  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:17:30.038073  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:17:30.038217  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:17:30.038403  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:17:30.038623  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:17:30.038627  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetState
	I0729 19:17:30.039185  156414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:17:30.039221  156414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:17:30.040574  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:17:30.040687  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:17:30.042879  156414 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 19:17:30.042873  156414 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:17:30.044279  156414 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 19:17:30.044298  156414 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 19:17:30.044324  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:17:30.044544  156414 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 19:17:30.044593  156414 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 19:17:30.044621  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:17:30.048075  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:17:30.048402  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:17:30.048442  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:17:30.048462  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:17:30.048613  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:17:30.048761  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:17:30.048845  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:17:30.048890  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:17:30.048914  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:17:30.049132  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:17:30.049289  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:17:30.049306  156414 sshutil.go:53] new ssh client: &{IP:192.168.50.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/embed-certs-368536/id_rsa Username:docker}
	I0729 19:17:30.049441  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:17:30.049593  156414 sshutil.go:53] new ssh client: &{IP:192.168.50.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/embed-certs-368536/id_rsa Username:docker}
	I0729 19:17:30.055718  156414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34081
	I0729 19:17:30.056086  156414 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:17:30.056521  156414 main.go:141] libmachine: Using API Version  1
	I0729 19:17:30.056546  156414 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:17:30.056931  156414 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:17:30.057098  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetState
	I0729 19:17:30.058559  156414 main.go:141] libmachine: (embed-certs-368536) Calling .DriverName
	I0729 19:17:30.058795  156414 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 19:17:30.058810  156414 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 19:17:30.058825  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHHostname
	I0729 19:17:30.061253  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:17:30.061842  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHPort
	I0729 19:17:30.061880  156414 main.go:141] libmachine: (embed-certs-368536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:e7:e8", ip: ""} in network mk-embed-certs-368536: {Iface:virbr4 ExpiryTime:2024-07-29 20:03:02 +0000 UTC Type:0 Mac:52:54:00:86:e7:e8 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:embed-certs-368536 Clientid:01:52:54:00:86:e7:e8}
	I0729 19:17:30.061900  156414 main.go:141] libmachine: (embed-certs-368536) DBG | domain embed-certs-368536 has defined IP address 192.168.50.95 and MAC address 52:54:00:86:e7:e8 in network mk-embed-certs-368536
	I0729 19:17:30.062053  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHKeyPath
	I0729 19:17:30.062195  156414 main.go:141] libmachine: (embed-certs-368536) Calling .GetSSHUsername
	I0729 19:17:30.062346  156414 sshutil.go:53] new ssh client: &{IP:192.168.50.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/embed-certs-368536/id_rsa Username:docker}
	I0729 19:17:30.192595  156414 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 19:17:30.208960  156414 node_ready.go:35] waiting up to 6m0s for node "embed-certs-368536" to be "Ready" ...
	I0729 19:17:30.216230  156414 node_ready.go:49] node "embed-certs-368536" has status "Ready":"True"
	I0729 19:17:30.216247  156414 node_ready.go:38] duration metric: took 7.255724ms for node "embed-certs-368536" to be "Ready" ...
	I0729 19:17:30.216256  156414 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:17:30.219988  156414 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:17:30.224074  156414 pod_ready.go:92] pod "etcd-embed-certs-368536" in "kube-system" namespace has status "Ready":"True"
	I0729 19:17:30.224099  156414 pod_ready.go:81] duration metric: took 4.088257ms for pod "etcd-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:17:30.224109  156414 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:17:30.228389  156414 pod_ready.go:92] pod "kube-apiserver-embed-certs-368536" in "kube-system" namespace has status "Ready":"True"
	I0729 19:17:30.228409  156414 pod_ready.go:81] duration metric: took 4.292723ms for pod "kube-apiserver-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:17:30.228417  156414 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:17:30.233616  156414 pod_ready.go:92] pod "kube-controller-manager-embed-certs-368536" in "kube-system" namespace has status "Ready":"True"
	I0729 19:17:30.233634  156414 pod_ready.go:81] duration metric: took 5.212376ms for pod "kube-controller-manager-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:17:30.233642  156414 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:17:30.242933  156414 pod_ready.go:92] pod "kube-scheduler-embed-certs-368536" in "kube-system" namespace has status "Ready":"True"
	I0729 19:17:30.242951  156414 pod_ready.go:81] duration metric: took 9.302507ms for pod "kube-scheduler-embed-certs-368536" in "kube-system" namespace to be "Ready" ...
	I0729 19:17:30.242959  156414 pod_ready.go:38] duration metric: took 26.692394ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:17:30.242973  156414 api_server.go:52] waiting for apiserver process to appear ...
	I0729 19:17:30.243016  156414 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:17:30.261484  156414 api_server.go:72] duration metric: took 266.652937ms to wait for apiserver process to appear ...
	I0729 19:17:30.261513  156414 api_server.go:88] waiting for apiserver healthz status ...
	I0729 19:17:30.261534  156414 api_server.go:253] Checking apiserver healthz at https://192.168.50.95:8443/healthz ...
	I0729 19:17:30.269760  156414 api_server.go:279] https://192.168.50.95:8443/healthz returned 200:
	ok
	I0729 19:17:30.270848  156414 api_server.go:141] control plane version: v1.30.3
	I0729 19:17:30.270872  156414 api_server.go:131] duration metric: took 9.352433ms to wait for apiserver health ...
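The two api_server.go checks above amount to a plain GET on /healthz followed by a server-version lookup. A sketch of the same pair of calls with client-go is below; it is illustrative, reuses the kubeconfig path shown earlier in the log, and does not reflect minikube's internal client plumbing.

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig assumption as the earlier sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19339-88081/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// GET /healthz returns "ok" with HTTP 200 once the apiserver is up.
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz: %s\n", body)

	// Corresponds to the "control plane version: v1.30.3" line in the log.
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion)
}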
	I0729 19:17:30.270880  156414 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 19:17:30.312744  156414 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 19:17:30.317547  156414 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 19:17:30.317570  156414 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 19:17:30.332468  156414 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 19:17:30.352498  156414 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 19:17:30.352531  156414 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 19:17:30.392028  156414 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 19:17:30.392055  156414 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 19:17:30.413559  156414 system_pods.go:59] 4 kube-system pods found
	I0729 19:17:30.413586  156414 system_pods.go:61] "etcd-embed-certs-368536" [d1a1b671-c852-4a69-80fe-9ad13fffc01b] Running
	I0729 19:17:30.413591  156414 system_pods.go:61] "kube-apiserver-embed-certs-368536" [3b9307df-4472-499d-8a82-78f0f342e745] Running
	I0729 19:17:30.413595  156414 system_pods.go:61] "kube-controller-manager-embed-certs-368536" [89bf9b90-6735-417d-9f30-13eacb946ef4] Running
	I0729 19:17:30.413598  156414 system_pods.go:61] "kube-scheduler-embed-certs-368536" [a44062e3-c937-4765-9dfc-858cacbd3a90] Running
	I0729 19:17:30.413603  156414 system_pods.go:74] duration metric: took 142.71846ms to wait for pod list to return data ...
	I0729 19:17:30.413610  156414 default_sa.go:34] waiting for default service account to be created ...
	I0729 19:17:30.424371  156414 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 19:17:30.615212  156414 default_sa.go:45] found service account: "default"
	I0729 19:17:30.615237  156414 default_sa.go:55] duration metric: took 201.621467ms for default service account to be created ...
	I0729 19:17:30.615246  156414 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 19:17:30.831144  156414 system_pods.go:86] 4 kube-system pods found
	I0729 19:17:30.831175  156414 system_pods.go:89] "etcd-embed-certs-368536" [d1a1b671-c852-4a69-80fe-9ad13fffc01b] Running
	I0729 19:17:30.831182  156414 system_pods.go:89] "kube-apiserver-embed-certs-368536" [3b9307df-4472-499d-8a82-78f0f342e745] Running
	I0729 19:17:30.831186  156414 system_pods.go:89] "kube-controller-manager-embed-certs-368536" [89bf9b90-6735-417d-9f30-13eacb946ef4] Running
	I0729 19:17:30.831190  156414 system_pods.go:89] "kube-scheduler-embed-certs-368536" [a44062e3-c937-4765-9dfc-858cacbd3a90] Running
	I0729 19:17:30.831210  156414 retry.go:31] will retry after 301.650623ms: missing components: kube-dns, kube-proxy
	I0729 19:17:31.127532  156414 main.go:141] libmachine: Making call to close driver server
	I0729 19:17:31.127599  156414 main.go:141] libmachine: (embed-certs-368536) Calling .Close
	I0729 19:17:31.127595  156414 main.go:141] libmachine: Making call to close driver server
	I0729 19:17:31.127620  156414 main.go:141] libmachine: (embed-certs-368536) Calling .Close
	I0729 19:17:31.127910  156414 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:17:31.127925  156414 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:17:31.127935  156414 main.go:141] libmachine: Making call to close driver server
	I0729 19:17:31.127943  156414 main.go:141] libmachine: (embed-certs-368536) Calling .Close
	I0729 19:17:31.127958  156414 main.go:141] libmachine: (embed-certs-368536) DBG | Closing plugin on server side
	I0729 19:17:31.127974  156414 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:17:31.127985  156414 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:17:31.127999  156414 main.go:141] libmachine: Making call to close driver server
	I0729 19:17:31.128008  156414 main.go:141] libmachine: (embed-certs-368536) Calling .Close
	I0729 19:17:31.128212  156414 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:17:31.128221  156414 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:17:31.128440  156414 main.go:141] libmachine: (embed-certs-368536) DBG | Closing plugin on server side
	I0729 19:17:31.128455  156414 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:17:31.128467  156414 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:17:31.155504  156414 system_pods.go:86] 8 kube-system pods found
	I0729 19:17:31.155543  156414 system_pods.go:89] "coredns-7db6d8ff4d-ds92x" [81db7fca-a759-47fc-bea8-697adcb09763] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:17:31.155559  156414 system_pods.go:89] "coredns-7db6d8ff4d-gnrvx" [b3edf97c-8085-4d56-a12d-a6daa3782004] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:17:31.155565  156414 system_pods.go:89] "etcd-embed-certs-368536" [d1a1b671-c852-4a69-80fe-9ad13fffc01b] Running
	I0729 19:17:31.155570  156414 system_pods.go:89] "kube-apiserver-embed-certs-368536" [3b9307df-4472-499d-8a82-78f0f342e745] Running
	I0729 19:17:31.155575  156414 system_pods.go:89] "kube-controller-manager-embed-certs-368536" [89bf9b90-6735-417d-9f30-13eacb946ef4] Running
	I0729 19:17:31.155580  156414 system_pods.go:89] "kube-proxy-rxqlm" [1b66638b-5fb8-4bae-a129-8a1fb54389f4] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 19:17:31.155586  156414 system_pods.go:89] "kube-scheduler-embed-certs-368536" [a44062e3-c937-4765-9dfc-858cacbd3a90] Running
	I0729 19:17:31.155590  156414 system_pods.go:89] "storage-provisioner" [83d64a91-164a-45b2-9a82-d826e64e6cbd] Pending
	I0729 19:17:31.155606  156414 retry.go:31] will retry after 310.574298ms: missing components: kube-dns, kube-proxy
	I0729 19:17:31.159525  156414 main.go:141] libmachine: Making call to close driver server
	I0729 19:17:31.159546  156414 main.go:141] libmachine: (embed-certs-368536) Calling .Close
	I0729 19:17:31.160952  156414 main.go:141] libmachine: (embed-certs-368536) DBG | Closing plugin on server side
	I0729 19:17:31.160961  156414 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:17:31.160976  156414 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:17:31.346360  156414 main.go:141] libmachine: Making call to close driver server
	I0729 19:17:31.346390  156414 main.go:141] libmachine: (embed-certs-368536) Calling .Close
	I0729 19:17:31.346700  156414 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:17:31.346718  156414 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:17:31.346732  156414 main.go:141] libmachine: Making call to close driver server
	I0729 19:17:31.346742  156414 main.go:141] libmachine: (embed-certs-368536) Calling .Close
	I0729 19:17:31.347006  156414 main.go:141] libmachine: (embed-certs-368536) DBG | Closing plugin on server side
	I0729 19:17:31.347052  156414 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:17:31.347059  156414 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:17:31.347075  156414 addons.go:475] Verifying addon metrics-server=true in "embed-certs-368536"
	I0729 19:17:31.348884  156414 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0729 19:17:31.350473  156414 addons.go:510] duration metric: took 1.355642198s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0729 19:17:31.473514  156414 system_pods.go:86] 9 kube-system pods found
	I0729 19:17:31.473553  156414 system_pods.go:89] "coredns-7db6d8ff4d-ds92x" [81db7fca-a759-47fc-bea8-697adcb09763] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:17:31.473561  156414 system_pods.go:89] "coredns-7db6d8ff4d-gnrvx" [b3edf97c-8085-4d56-a12d-a6daa3782004] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:17:31.473567  156414 system_pods.go:89] "etcd-embed-certs-368536" [d1a1b671-c852-4a69-80fe-9ad13fffc01b] Running
	I0729 19:17:31.473573  156414 system_pods.go:89] "kube-apiserver-embed-certs-368536" [3b9307df-4472-499d-8a82-78f0f342e745] Running
	I0729 19:17:31.473578  156414 system_pods.go:89] "kube-controller-manager-embed-certs-368536" [89bf9b90-6735-417d-9f30-13eacb946ef4] Running
	I0729 19:17:31.473583  156414 system_pods.go:89] "kube-proxy-rxqlm" [1b66638b-5fb8-4bae-a129-8a1fb54389f4] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 19:17:31.473587  156414 system_pods.go:89] "kube-scheduler-embed-certs-368536" [a44062e3-c937-4765-9dfc-858cacbd3a90] Running
	I0729 19:17:31.473596  156414 system_pods.go:89] "metrics-server-569cc877fc-9z4tp" [1382116a-be81-46cb-92b8-ae335164f846] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:17:31.473605  156414 system_pods.go:89] "storage-provisioner" [83d64a91-164a-45b2-9a82-d826e64e6cbd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 19:17:31.473622  156414 retry.go:31] will retry after 446.790872ms: missing components: kube-dns, kube-proxy
	I0729 19:17:31.928348  156414 system_pods.go:86] 9 kube-system pods found
	I0729 19:17:31.928381  156414 system_pods.go:89] "coredns-7db6d8ff4d-ds92x" [81db7fca-a759-47fc-bea8-697adcb09763] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:17:31.928389  156414 system_pods.go:89] "coredns-7db6d8ff4d-gnrvx" [b3edf97c-8085-4d56-a12d-a6daa3782004] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:17:31.928396  156414 system_pods.go:89] "etcd-embed-certs-368536" [d1a1b671-c852-4a69-80fe-9ad13fffc01b] Running
	I0729 19:17:31.928401  156414 system_pods.go:89] "kube-apiserver-embed-certs-368536" [3b9307df-4472-499d-8a82-78f0f342e745] Running
	I0729 19:17:31.928406  156414 system_pods.go:89] "kube-controller-manager-embed-certs-368536" [89bf9b90-6735-417d-9f30-13eacb946ef4] Running
	I0729 19:17:31.928409  156414 system_pods.go:89] "kube-proxy-rxqlm" [1b66638b-5fb8-4bae-a129-8a1fb54389f4] Running
	I0729 19:17:31.928413  156414 system_pods.go:89] "kube-scheduler-embed-certs-368536" [a44062e3-c937-4765-9dfc-858cacbd3a90] Running
	I0729 19:17:31.928420  156414 system_pods.go:89] "metrics-server-569cc877fc-9z4tp" [1382116a-be81-46cb-92b8-ae335164f846] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:17:31.928429  156414 system_pods.go:89] "storage-provisioner" [83d64a91-164a-45b2-9a82-d826e64e6cbd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 19:17:31.928444  156414 retry.go:31] will retry after 467.830899ms: missing components: kube-dns
	I0729 19:17:32.403619  156414 system_pods.go:86] 9 kube-system pods found
	I0729 19:17:32.403649  156414 system_pods.go:89] "coredns-7db6d8ff4d-ds92x" [81db7fca-a759-47fc-bea8-697adcb09763] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:17:32.403659  156414 system_pods.go:89] "coredns-7db6d8ff4d-gnrvx" [b3edf97c-8085-4d56-a12d-a6daa3782004] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:17:32.403665  156414 system_pods.go:89] "etcd-embed-certs-368536" [d1a1b671-c852-4a69-80fe-9ad13fffc01b] Running
	I0729 19:17:32.403670  156414 system_pods.go:89] "kube-apiserver-embed-certs-368536" [3b9307df-4472-499d-8a82-78f0f342e745] Running
	I0729 19:17:32.403676  156414 system_pods.go:89] "kube-controller-manager-embed-certs-368536" [89bf9b90-6735-417d-9f30-13eacb946ef4] Running
	I0729 19:17:32.403683  156414 system_pods.go:89] "kube-proxy-rxqlm" [1b66638b-5fb8-4bae-a129-8a1fb54389f4] Running
	I0729 19:17:32.403689  156414 system_pods.go:89] "kube-scheduler-embed-certs-368536" [a44062e3-c937-4765-9dfc-858cacbd3a90] Running
	I0729 19:17:32.403697  156414 system_pods.go:89] "metrics-server-569cc877fc-9z4tp" [1382116a-be81-46cb-92b8-ae335164f846] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:17:32.403706  156414 system_pods.go:89] "storage-provisioner" [83d64a91-164a-45b2-9a82-d826e64e6cbd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 19:17:32.403729  156414 retry.go:31] will retry after 745.010861ms: missing components: kube-dns
	I0729 19:17:33.163660  156414 system_pods.go:86] 9 kube-system pods found
	I0729 19:17:33.163697  156414 system_pods.go:89] "coredns-7db6d8ff4d-ds92x" [81db7fca-a759-47fc-bea8-697adcb09763] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:17:33.163710  156414 system_pods.go:89] "coredns-7db6d8ff4d-gnrvx" [b3edf97c-8085-4d56-a12d-a6daa3782004] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:17:33.163719  156414 system_pods.go:89] "etcd-embed-certs-368536" [d1a1b671-c852-4a69-80fe-9ad13fffc01b] Running
	I0729 19:17:33.163733  156414 system_pods.go:89] "kube-apiserver-embed-certs-368536" [3b9307df-4472-499d-8a82-78f0f342e745] Running
	I0729 19:17:33.163740  156414 system_pods.go:89] "kube-controller-manager-embed-certs-368536" [89bf9b90-6735-417d-9f30-13eacb946ef4] Running
	I0729 19:17:33.163746  156414 system_pods.go:89] "kube-proxy-rxqlm" [1b66638b-5fb8-4bae-a129-8a1fb54389f4] Running
	I0729 19:17:33.163751  156414 system_pods.go:89] "kube-scheduler-embed-certs-368536" [a44062e3-c937-4765-9dfc-858cacbd3a90] Running
	I0729 19:17:33.163761  156414 system_pods.go:89] "metrics-server-569cc877fc-9z4tp" [1382116a-be81-46cb-92b8-ae335164f846] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:17:33.163770  156414 system_pods.go:89] "storage-provisioner" [83d64a91-164a-45b2-9a82-d826e64e6cbd] Running
	I0729 19:17:33.163791  156414 retry.go:31] will retry after 658.944312ms: missing components: kube-dns
	I0729 19:17:33.830608  156414 system_pods.go:86] 9 kube-system pods found
	I0729 19:17:33.830643  156414 system_pods.go:89] "coredns-7db6d8ff4d-ds92x" [81db7fca-a759-47fc-bea8-697adcb09763] Running
	I0729 19:17:33.830650  156414 system_pods.go:89] "coredns-7db6d8ff4d-gnrvx" [b3edf97c-8085-4d56-a12d-a6daa3782004] Running
	I0729 19:17:33.830656  156414 system_pods.go:89] "etcd-embed-certs-368536" [d1a1b671-c852-4a69-80fe-9ad13fffc01b] Running
	I0729 19:17:33.830662  156414 system_pods.go:89] "kube-apiserver-embed-certs-368536" [3b9307df-4472-499d-8a82-78f0f342e745] Running
	I0729 19:17:33.830670  156414 system_pods.go:89] "kube-controller-manager-embed-certs-368536" [89bf9b90-6735-417d-9f30-13eacb946ef4] Running
	I0729 19:17:33.830675  156414 system_pods.go:89] "kube-proxy-rxqlm" [1b66638b-5fb8-4bae-a129-8a1fb54389f4] Running
	I0729 19:17:33.830682  156414 system_pods.go:89] "kube-scheduler-embed-certs-368536" [a44062e3-c937-4765-9dfc-858cacbd3a90] Running
	I0729 19:17:33.830692  156414 system_pods.go:89] "metrics-server-569cc877fc-9z4tp" [1382116a-be81-46cb-92b8-ae335164f846] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:17:33.830703  156414 system_pods.go:89] "storage-provisioner" [83d64a91-164a-45b2-9a82-d826e64e6cbd] Running
	I0729 19:17:33.830714  156414 system_pods.go:126] duration metric: took 3.215460876s to wait for k8s-apps to be running ...
	I0729 19:17:33.830726  156414 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 19:17:33.830824  156414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 19:17:33.847810  156414 system_svc.go:56] duration metric: took 17.074145ms WaitForService to wait for kubelet
	I0729 19:17:33.847837  156414 kubeadm.go:582] duration metric: took 3.853011216s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 19:17:33.847861  156414 node_conditions.go:102] verifying NodePressure condition ...
	I0729 19:17:33.850180  156414 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 19:17:33.850198  156414 node_conditions.go:123] node cpu capacity is 2
	I0729 19:17:33.850209  156414 node_conditions.go:105] duration metric: took 2.342951ms to run NodePressure ...
	I0729 19:17:33.850221  156414 start.go:241] waiting for startup goroutines ...
	I0729 19:17:33.850230  156414 start.go:246] waiting for cluster config update ...
	I0729 19:17:33.850242  156414 start.go:255] writing updated cluster config ...
	I0729 19:17:33.850512  156414 ssh_runner.go:195] Run: rm -f paused
	I0729 19:17:33.898396  156414 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 19:17:33.899771  156414 out.go:177] * Done! kubectl is now configured to use "embed-certs-368536" cluster and "default" namespace by default
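[editor's note] The retry loop captured above (system_pods.go / retry.go) repeatedly lists the kube-system pods and backs off a few hundred milliseconds until the required components (kube-dns, kube-proxy) report Running. The sketch below is a minimal, illustrative approximation of that readiness wait using client-go; it is not minikube's actual implementation, and the kubeconfig path, the k8s-app label key, and the component list are assumptions taken from the log output for illustration. It also uses the simplified check "pod phase == Running" rather than per-container readiness.

// Minimal readiness-poll sketch (assumptions noted above; not minikube code).
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location for this sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Components the log waits on before declaring "k8s-apps to be running".
	required := []string{"kube-dns", "kube-proxy"}

	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 3*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
			if err != nil {
				return false, nil // treat transient API errors as "retry"
			}
			running := map[string]bool{}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					// The k8s-app label names the component (e.g. "kube-dns" on the coredns pods).
					running[p.Labels["k8s-app"]] = true
				}
			}
			for _, c := range required {
				if !running[c] {
					fmt.Printf("still waiting for missing component: %s\n", c)
					return false, nil
				}
			}
			return true, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("kube-system components are running")
}

In the failing runs collected in this report, this is the phase where the wait either succeeds (as it does here for embed-certs-368536) or times out; the subsequent `==> CRI-O <==` section is the runtime-side debug log gathered by `minikube logs` for the same machine.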
	
	
	==> CRI-O <==
	Jul 29 19:32:49 embed-certs-368536 crio[735]: time="2024-07-29 19:32:49.730143873Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d8aba0b8-dd42-4c38-96bd-8b072d3e1327 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:32:49 embed-certs-368536 crio[735]: time="2024-07-29 19:32:49.731708469Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dfbc80cc-0245-4472-8710-f8b772d7af0c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:32:49 embed-certs-368536 crio[735]: time="2024-07-29 19:32:49.732376976Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722281569732352471,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dfbc80cc-0245-4472-8710-f8b772d7af0c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:32:49 embed-certs-368536 crio[735]: time="2024-07-29 19:32:49.733038323Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fd776e03-a2ec-44a0-922d-95773e5bd897 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:32:49 embed-certs-368536 crio[735]: time="2024-07-29 19:32:49.733108258Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fd776e03-a2ec-44a0-922d-95773e5bd897 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:32:49 embed-certs-368536 crio[735]: time="2024-07-29 19:32:49.733327722Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e6538892eefc87c3409d41c364973087f89d912202bbd19204083f1d856b3881,PodSandboxId:82d2c8bcaec4e003aa086b4c64d6027ca46d08f00fe8d952e5f0f45986f4dd74,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722280653166131866,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ds92x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81db7fca-a759-47fc-bea8-697adcb09763,},Annotations:map[string]string{io.kubernetes.container.hash: 858a3191,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66a29c89e8d7b18ca5f1d479153563b6e102fa2b7d64a3b0e573196f7cbaf989,PodSandboxId:3238a9e254c813a323ea1741d89ac974dbfb3754995f117e263043f91c2c4c49,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722280653109115416,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gnrvx,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: b3edf97c-8085-4d56-a12d-a6daa3782004,},Annotations:map[string]string{io.kubernetes.container.hash: 25bf3c5d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52d51ed1bbac5c274a39024d0bad3a9452d6441b7ecdcb63f8092a7e50ed3210,PodSandboxId:e61c25a79797527e318e288f4c195021b23e9469d2bc41b628b00dff8d54c1be,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1722280651644152691,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83d64a91-164a-45b2-9a82-d826e64e6cbd,},Annotations:map[string]string{io.kubernetes.container.hash: 69c8f71d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8bd8c28c3f1568d0a5b315626538f4a9d96c454a2a0bfd77cdc1311e922494f,PodSandboxId:7620b4ba07280df59d1dc9961b559d33b14970d0ea1c054e94590c9dcb5cf54e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt
:1722280651414223075,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rxqlm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b66638b-5fb8-4bae-a129-8a1fb54389f4,},Annotations:map[string]string{io.kubernetes.container.hash: 92b80f56,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6dcda9b2e1dc9744276cda6fee20af425aa842a19ad8760203adac5ffa740008,PodSandboxId:806d4f9abd61a74145d3ce9d948e41a786cdd31f1a7e08d3562a703892e9e273,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722280631593598504,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-368536,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bac9cb10bd39355f93f79a1906e9e97,},Annotations:map[string]string{io.kubernetes.container.hash: 7015d511,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:600489eb286eac55c593cb64fe454ef94937857bc7be1de102a6f56a76604b2b,PodSandboxId:391c4528b164d0ee88276bef3140492d234a9621de7d7ba56c9d02b4169e0e0d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722280631563623279,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-368536,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f91b7a33a8ce7c8a88aef4dd4c6e195e,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1841eea64962819c42c74aa05055e43f5f2eb90205b000ed03460683caf85065,PodSandboxId:a9f6a2ce03c7314db1e75fafc80a5f967443568f5db00a812415419e9927922b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722280631536656428,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-368536,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9282a3d713124baa99437d84a975f77,},Annotations:map[string]string{io.kubernetes.container.hash: 2605caa3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d868148f3454fd7df7aeaf1387b63fd517d97e1cc6f64b01c1d3e9a82c0e876,PodSandboxId:359254c3e62383d1d2f61b9cd235a08509cdecf5db99244f91c6541e3f8b64f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722280631527294058,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-368536,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b71fb0d9dc40452f3a849de813c6e179,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f60b3d1fca4830e462e9f2f4362caaabfc8186d78ecddbe6108ffba50a30a9a5,PodSandboxId:ffdffecccfc3f8b332033dbb5baecfdcb19a3e62c604ebc8256464baf118188b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722280340337352693,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-368536,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9282a3d713124baa99437d84a975f77,},Annotations:map[string]string{io.kubernetes.container.hash: 2605caa3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fd776e03-a2ec-44a0-922d-95773e5bd897 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:32:49 embed-certs-368536 crio[735]: time="2024-07-29 19:32:49.770222657Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b0db581d-ca2b-4cc0-8952-b1ad59900df6 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:32:49 embed-certs-368536 crio[735]: time="2024-07-29 19:32:49.770293021Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b0db581d-ca2b-4cc0-8952-b1ad59900df6 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:32:49 embed-certs-368536 crio[735]: time="2024-07-29 19:32:49.772093217Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c3e90ad5-d04a-4398-a848-70b55f712246 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:32:49 embed-certs-368536 crio[735]: time="2024-07-29 19:32:49.772494281Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722281569772474174,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c3e90ad5-d04a-4398-a848-70b55f712246 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:32:49 embed-certs-368536 crio[735]: time="2024-07-29 19:32:49.773231406Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=91400431-cbcc-46dd-a338-92e6f42fa096 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:32:49 embed-certs-368536 crio[735]: time="2024-07-29 19:32:49.773283236Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=91400431-cbcc-46dd-a338-92e6f42fa096 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:32:49 embed-certs-368536 crio[735]: time="2024-07-29 19:32:49.773474439Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e6538892eefc87c3409d41c364973087f89d912202bbd19204083f1d856b3881,PodSandboxId:82d2c8bcaec4e003aa086b4c64d6027ca46d08f00fe8d952e5f0f45986f4dd74,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722280653166131866,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ds92x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81db7fca-a759-47fc-bea8-697adcb09763,},Annotations:map[string]string{io.kubernetes.container.hash: 858a3191,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66a29c89e8d7b18ca5f1d479153563b6e102fa2b7d64a3b0e573196f7cbaf989,PodSandboxId:3238a9e254c813a323ea1741d89ac974dbfb3754995f117e263043f91c2c4c49,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722280653109115416,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gnrvx,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: b3edf97c-8085-4d56-a12d-a6daa3782004,},Annotations:map[string]string{io.kubernetes.container.hash: 25bf3c5d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52d51ed1bbac5c274a39024d0bad3a9452d6441b7ecdcb63f8092a7e50ed3210,PodSandboxId:e61c25a79797527e318e288f4c195021b23e9469d2bc41b628b00dff8d54c1be,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1722280651644152691,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83d64a91-164a-45b2-9a82-d826e64e6cbd,},Annotations:map[string]string{io.kubernetes.container.hash: 69c8f71d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8bd8c28c3f1568d0a5b315626538f4a9d96c454a2a0bfd77cdc1311e922494f,PodSandboxId:7620b4ba07280df59d1dc9961b559d33b14970d0ea1c054e94590c9dcb5cf54e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt
:1722280651414223075,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rxqlm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b66638b-5fb8-4bae-a129-8a1fb54389f4,},Annotations:map[string]string{io.kubernetes.container.hash: 92b80f56,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6dcda9b2e1dc9744276cda6fee20af425aa842a19ad8760203adac5ffa740008,PodSandboxId:806d4f9abd61a74145d3ce9d948e41a786cdd31f1a7e08d3562a703892e9e273,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722280631593598504,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-368536,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bac9cb10bd39355f93f79a1906e9e97,},Annotations:map[string]string{io.kubernetes.container.hash: 7015d511,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:600489eb286eac55c593cb64fe454ef94937857bc7be1de102a6f56a76604b2b,PodSandboxId:391c4528b164d0ee88276bef3140492d234a9621de7d7ba56c9d02b4169e0e0d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722280631563623279,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-368536,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f91b7a33a8ce7c8a88aef4dd4c6e195e,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1841eea64962819c42c74aa05055e43f5f2eb90205b000ed03460683caf85065,PodSandboxId:a9f6a2ce03c7314db1e75fafc80a5f967443568f5db00a812415419e9927922b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722280631536656428,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-368536,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9282a3d713124baa99437d84a975f77,},Annotations:map[string]string{io.kubernetes.container.hash: 2605caa3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d868148f3454fd7df7aeaf1387b63fd517d97e1cc6f64b01c1d3e9a82c0e876,PodSandboxId:359254c3e62383d1d2f61b9cd235a08509cdecf5db99244f91c6541e3f8b64f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722280631527294058,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-368536,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b71fb0d9dc40452f3a849de813c6e179,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f60b3d1fca4830e462e9f2f4362caaabfc8186d78ecddbe6108ffba50a30a9a5,PodSandboxId:ffdffecccfc3f8b332033dbb5baecfdcb19a3e62c604ebc8256464baf118188b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722280340337352693,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-368536,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9282a3d713124baa99437d84a975f77,},Annotations:map[string]string{io.kubernetes.container.hash: 2605caa3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=91400431-cbcc-46dd-a338-92e6f42fa096 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:32:49 embed-certs-368536 crio[735]: time="2024-07-29 19:32:49.806153329Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c21b0eac-8469-42ca-8361-5198f9d21284 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:32:49 embed-certs-368536 crio[735]: time="2024-07-29 19:32:49.806223029Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c21b0eac-8469-42ca-8361-5198f9d21284 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:32:49 embed-certs-368536 crio[735]: time="2024-07-29 19:32:49.807745235Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=382802e7-24e0-4998-b87c-32867c7319db name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:32:49 embed-certs-368536 crio[735]: time="2024-07-29 19:32:49.808324877Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722281569808302399,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=382802e7-24e0-4998-b87c-32867c7319db name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:32:49 embed-certs-368536 crio[735]: time="2024-07-29 19:32:49.808859577Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7aa08d55-cc41-42e5-be08-a2b0aa48b11e name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:32:49 embed-certs-368536 crio[735]: time="2024-07-29 19:32:49.808991880Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7aa08d55-cc41-42e5-be08-a2b0aa48b11e name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:32:49 embed-certs-368536 crio[735]: time="2024-07-29 19:32:49.809298259Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e6538892eefc87c3409d41c364973087f89d912202bbd19204083f1d856b3881,PodSandboxId:82d2c8bcaec4e003aa086b4c64d6027ca46d08f00fe8d952e5f0f45986f4dd74,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722280653166131866,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ds92x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81db7fca-a759-47fc-bea8-697adcb09763,},Annotations:map[string]string{io.kubernetes.container.hash: 858a3191,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66a29c89e8d7b18ca5f1d479153563b6e102fa2b7d64a3b0e573196f7cbaf989,PodSandboxId:3238a9e254c813a323ea1741d89ac974dbfb3754995f117e263043f91c2c4c49,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722280653109115416,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gnrvx,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: b3edf97c-8085-4d56-a12d-a6daa3782004,},Annotations:map[string]string{io.kubernetes.container.hash: 25bf3c5d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52d51ed1bbac5c274a39024d0bad3a9452d6441b7ecdcb63f8092a7e50ed3210,PodSandboxId:e61c25a79797527e318e288f4c195021b23e9469d2bc41b628b00dff8d54c1be,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1722280651644152691,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83d64a91-164a-45b2-9a82-d826e64e6cbd,},Annotations:map[string]string{io.kubernetes.container.hash: 69c8f71d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8bd8c28c3f1568d0a5b315626538f4a9d96c454a2a0bfd77cdc1311e922494f,PodSandboxId:7620b4ba07280df59d1dc9961b559d33b14970d0ea1c054e94590c9dcb5cf54e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt
:1722280651414223075,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rxqlm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b66638b-5fb8-4bae-a129-8a1fb54389f4,},Annotations:map[string]string{io.kubernetes.container.hash: 92b80f56,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6dcda9b2e1dc9744276cda6fee20af425aa842a19ad8760203adac5ffa740008,PodSandboxId:806d4f9abd61a74145d3ce9d948e41a786cdd31f1a7e08d3562a703892e9e273,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722280631593598504,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-368536,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bac9cb10bd39355f93f79a1906e9e97,},Annotations:map[string]string{io.kubernetes.container.hash: 7015d511,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:600489eb286eac55c593cb64fe454ef94937857bc7be1de102a6f56a76604b2b,PodSandboxId:391c4528b164d0ee88276bef3140492d234a9621de7d7ba56c9d02b4169e0e0d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722280631563623279,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-368536,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f91b7a33a8ce7c8a88aef4dd4c6e195e,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1841eea64962819c42c74aa05055e43f5f2eb90205b000ed03460683caf85065,PodSandboxId:a9f6a2ce03c7314db1e75fafc80a5f967443568f5db00a812415419e9927922b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722280631536656428,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-368536,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9282a3d713124baa99437d84a975f77,},Annotations:map[string]string{io.kubernetes.container.hash: 2605caa3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d868148f3454fd7df7aeaf1387b63fd517d97e1cc6f64b01c1d3e9a82c0e876,PodSandboxId:359254c3e62383d1d2f61b9cd235a08509cdecf5db99244f91c6541e3f8b64f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722280631527294058,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-368536,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b71fb0d9dc40452f3a849de813c6e179,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f60b3d1fca4830e462e9f2f4362caaabfc8186d78ecddbe6108ffba50a30a9a5,PodSandboxId:ffdffecccfc3f8b332033dbb5baecfdcb19a3e62c604ebc8256464baf118188b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722280340337352693,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-368536,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9282a3d713124baa99437d84a975f77,},Annotations:map[string]string{io.kubernetes.container.hash: 2605caa3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7aa08d55-cc41-42e5-be08-a2b0aa48b11e name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:32:49 embed-certs-368536 crio[735]: time="2024-07-29 19:32:49.825222182Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=24dde84a-54a0-4f52-9758-b90bd0ab0f1a name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 29 19:32:49 embed-certs-368536 crio[735]: time="2024-07-29 19:32:49.825607282Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:82d2c8bcaec4e003aa086b4c64d6027ca46d08f00fe8d952e5f0f45986f4dd74,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-ds92x,Uid:81db7fca-a759-47fc-bea8-697adcb09763,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722280652845613217,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-ds92x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81db7fca-a759-47fc-bea8-697adcb09763,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T19:17:31.036668895Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3238a9e254c813a323ea1741d89ac974dbfb3754995f117e263043f91c2c4c49,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-gnrvx,Uid:b3edf97c-8085-4d56-a12d-a6daa3782004,Namespace:kube-s
ystem,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722280652822346405,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-gnrvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3edf97c-8085-4d56-a12d-a6daa3782004,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T19:17:31.010585336Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:00e073730dbe316682a785b2e3bfef93a602e92860d7f45bff2105c96fd10d82,Metadata:&PodSandboxMetadata{Name:metrics-server-569cc877fc-9z4tp,Uid:1382116a-be81-46cb-92b8-ae335164f846,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722280651501947831,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-569cc877fc-9z4tp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1382116a-be81-46cb-92b8-ae335164f846,k8s-app: metrics-server,pod-template-hash: 569cc877fc,},Annotations:m
ap[string]string{kubernetes.io/config.seen: 2024-07-29T19:17:31.191336321Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e61c25a79797527e318e288f4c195021b23e9469d2bc41b628b00dff8d54c1be,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:83d64a91-164a-45b2-9a82-d826e64e6cbd,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722280651454464640,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83d64a91-164a-45b2-9a82-d826e64e6cbd,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":
[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-29T19:17:31.141348744Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7620b4ba07280df59d1dc9961b559d33b14970d0ea1c054e94590c9dcb5cf54e,Metadata:&PodSandboxMetadata{Name:kube-proxy-rxqlm,Uid:1b66638b-5fb8-4bae-a129-8a1fb54389f4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722280651221480550,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-rxqlm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b66638b-5fb8-4bae-a129-8a1fb54389f4,k8s-app: kube-proxy,pod-tem
plate-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T19:17:30.910360444Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:391c4528b164d0ee88276bef3140492d234a9621de7d7ba56c9d02b4169e0e0d,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-368536,Uid:f91b7a33a8ce7c8a88aef4dd4c6e195e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722280631358810610,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-368536,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f91b7a33a8ce7c8a88aef4dd4c6e195e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: f91b7a33a8ce7c8a88aef4dd4c6e195e,kubernetes.io/config.seen: 2024-07-29T19:17:10.905228074Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:359254c3e62383d1d2f61b9cd235a08509cdecf5db99244f91c6541e3f8b64f2,Metadata:&PodSandboxMetadata{Name:kube-controlle
r-manager-embed-certs-368536,Uid:b71fb0d9dc40452f3a849de813c6e179,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722280631354725953,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-368536,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b71fb0d9dc40452f3a849de813c6e179,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b71fb0d9dc40452f3a849de813c6e179,kubernetes.io/config.seen: 2024-07-29T19:17:10.905227127Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a9f6a2ce03c7314db1e75fafc80a5f967443568f5db00a812415419e9927922b,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-368536,Uid:d9282a3d713124baa99437d84a975f77,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722280631342174728,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver
-embed-certs-368536,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9282a3d713124baa99437d84a975f77,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.95:8443,kubernetes.io/config.hash: d9282a3d713124baa99437d84a975f77,kubernetes.io/config.seen: 2024-07-29T19:17:10.905225821Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:806d4f9abd61a74145d3ce9d948e41a786cdd31f1a7e08d3562a703892e9e273,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-368536,Uid:2bac9cb10bd39355f93f79a1906e9e97,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722280631337695034,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-368536,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bac9cb10bd39355f93f79a1906e9e97,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50
.95:2379,kubernetes.io/config.hash: 2bac9cb10bd39355f93f79a1906e9e97,kubernetes.io/config.seen: 2024-07-29T19:17:10.905221498Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ffdffecccfc3f8b332033dbb5baecfdcb19a3e62c604ebc8256464baf118188b,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-368536,Uid:d9282a3d713124baa99437d84a975f77,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722280340115474673,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-368536,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9282a3d713124baa99437d84a975f77,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.95:8443,kubernetes.io/config.hash: d9282a3d713124baa99437d84a975f77,kubernetes.io/config.seen: 2024-07-29T19:12:19.615417723Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collect
or/interceptors.go:74" id=24dde84a-54a0-4f52-9758-b90bd0ab0f1a name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 29 19:32:49 embed-certs-368536 crio[735]: time="2024-07-29 19:32:49.826249310Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=587850fd-bc7f-4813-9c52-5455bd943322 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:32:49 embed-certs-368536 crio[735]: time="2024-07-29 19:32:49.826325603Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=587850fd-bc7f-4813-9c52-5455bd943322 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:32:49 embed-certs-368536 crio[735]: time="2024-07-29 19:32:49.826510412Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e6538892eefc87c3409d41c364973087f89d912202bbd19204083f1d856b3881,PodSandboxId:82d2c8bcaec4e003aa086b4c64d6027ca46d08f00fe8d952e5f0f45986f4dd74,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722280653166131866,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ds92x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81db7fca-a759-47fc-bea8-697adcb09763,},Annotations:map[string]string{io.kubernetes.container.hash: 858a3191,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66a29c89e8d7b18ca5f1d479153563b6e102fa2b7d64a3b0e573196f7cbaf989,PodSandboxId:3238a9e254c813a323ea1741d89ac974dbfb3754995f117e263043f91c2c4c49,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722280653109115416,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gnrvx,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: b3edf97c-8085-4d56-a12d-a6daa3782004,},Annotations:map[string]string{io.kubernetes.container.hash: 25bf3c5d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52d51ed1bbac5c274a39024d0bad3a9452d6441b7ecdcb63f8092a7e50ed3210,PodSandboxId:e61c25a79797527e318e288f4c195021b23e9469d2bc41b628b00dff8d54c1be,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1722280651644152691,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83d64a91-164a-45b2-9a82-d826e64e6cbd,},Annotations:map[string]string{io.kubernetes.container.hash: 69c8f71d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8bd8c28c3f1568d0a5b315626538f4a9d96c454a2a0bfd77cdc1311e922494f,PodSandboxId:7620b4ba07280df59d1dc9961b559d33b14970d0ea1c054e94590c9dcb5cf54e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt
:1722280651414223075,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rxqlm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b66638b-5fb8-4bae-a129-8a1fb54389f4,},Annotations:map[string]string{io.kubernetes.container.hash: 92b80f56,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6dcda9b2e1dc9744276cda6fee20af425aa842a19ad8760203adac5ffa740008,PodSandboxId:806d4f9abd61a74145d3ce9d948e41a786cdd31f1a7e08d3562a703892e9e273,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722280631593598504,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-368536,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bac9cb10bd39355f93f79a1906e9e97,},Annotations:map[string]string{io.kubernetes.container.hash: 7015d511,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:600489eb286eac55c593cb64fe454ef94937857bc7be1de102a6f56a76604b2b,PodSandboxId:391c4528b164d0ee88276bef3140492d234a9621de7d7ba56c9d02b4169e0e0d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722280631563623279,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-368536,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f91b7a33a8ce7c8a88aef4dd4c6e195e,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1841eea64962819c42c74aa05055e43f5f2eb90205b000ed03460683caf85065,PodSandboxId:a9f6a2ce03c7314db1e75fafc80a5f967443568f5db00a812415419e9927922b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722280631536656428,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-368536,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9282a3d713124baa99437d84a975f77,},Annotations:map[string]string{io.kubernetes.container.hash: 2605caa3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d868148f3454fd7df7aeaf1387b63fd517d97e1cc6f64b01c1d3e9a82c0e876,PodSandboxId:359254c3e62383d1d2f61b9cd235a08509cdecf5db99244f91c6541e3f8b64f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722280631527294058,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-368536,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b71fb0d9dc40452f3a849de813c6e179,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f60b3d1fca4830e462e9f2f4362caaabfc8186d78ecddbe6108ffba50a30a9a5,PodSandboxId:ffdffecccfc3f8b332033dbb5baecfdcb19a3e62c604ebc8256464baf118188b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722280340337352693,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-368536,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9282a3d713124baa99437d84a975f77,},Annotations:map[string]string{io.kubernetes.container.hash: 2605caa3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=587850fd-bc7f-4813-9c52-5455bd943322 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e6538892eefc8       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 minutes ago      Running             coredns                   0                   82d2c8bcaec4e       coredns-7db6d8ff4d-ds92x
	66a29c89e8d7b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 minutes ago      Running             coredns                   0                   3238a9e254c81       coredns-7db6d8ff4d-gnrvx
	52d51ed1bbac5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   e61c25a797975       storage-provisioner
	b8bd8c28c3f15       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   15 minutes ago      Running             kube-proxy                0                   7620b4ba07280       kube-proxy-rxqlm
	6dcda9b2e1dc9       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   15 minutes ago      Running             etcd                      2                   806d4f9abd61a       etcd-embed-certs-368536
	600489eb286ea       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   15 minutes ago      Running             kube-scheduler            2                   391c4528b164d       kube-scheduler-embed-certs-368536
	1841eea649628       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   15 minutes ago      Running             kube-apiserver            2                   a9f6a2ce03c73       kube-apiserver-embed-certs-368536
	5d868148f3454       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   15 minutes ago      Running             kube-controller-manager   2                   359254c3e6238       kube-controller-manager-embed-certs-368536
	f60b3d1fca483       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   20 minutes ago      Exited              kube-apiserver            1                   ffdffecccfc3f       kube-apiserver-embed-certs-368536
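
For reference, a listing like the one above can be regenerated on the node itself; a minimal sketch, assuming crictl is installed in the guest and pointed at the cri-o socket named in the node annotations further down:

  # all containers, including exited ones, as in the table above
  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a
  # the pod sandboxes those containers run in
  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock pods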
	
	
	==> coredns [66a29c89e8d7b18ca5f1d479153563b6e102fa2b7d64a3b0e573196f7cbaf989] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [e6538892eefc87c3409d41c364973087f89d912202bbd19204083f1d856b3881] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
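
Both CoreDNS replicas report the same configuration SHA512, i.e. they loaded identical Corefiles. A minimal way to inspect that Corefile, assuming the kubeadm-default ConfigMap name and a kubectl context named after this profile:

  kubectl --context embed-certs-368536 -n kube-system get configmap coredns -o yaml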
	
	
	==> describe nodes <==
	Name:               embed-certs-368536
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-368536
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=de53afae5e8f4269438099f4dad14d93a8a17e35
	                    minikube.k8s.io/name=embed-certs-368536
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T19_17_17_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 19:17:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-368536
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 19:32:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 19:27:49 +0000   Mon, 29 Jul 2024 19:17:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 19:27:49 +0000   Mon, 29 Jul 2024 19:17:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 19:27:49 +0000   Mon, 29 Jul 2024 19:17:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 19:27:49 +0000   Mon, 29 Jul 2024 19:17:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.95
	  Hostname:    embed-certs-368536
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8eee350f9b324193b8de34dbb432d91e
	  System UUID:                8eee350f-9b32-4193-b8de-34dbb432d91e
	  Boot ID:                    d0bedae2-8e93-4de8-9199-f4e1e7af4ab9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-ds92x                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-7db6d8ff4d-gnrvx                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-embed-certs-368536                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-embed-certs-368536             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-embed-certs-368536    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-rxqlm                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-embed-certs-368536             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-569cc877fc-9z4tp               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 15m   kube-proxy       
	  Normal  Starting                 15m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m   kubelet          Node embed-certs-368536 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m   kubelet          Node embed-certs-368536 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m   kubelet          Node embed-certs-368536 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m   node-controller  Node embed-certs-368536 event: Registered Node embed-certs-368536 in Controller
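
In the resource tables above, percentages are relative to the node's allocatable capacity (2 CPU, 2164184Ki memory): for example 950m/2000m of CPU is ~47% and 440Mi/~2113Mi of memory is ~20%, matching the totals shown. The whole section is the output of a node describe; a minimal sketch to regenerate it, assuming a kubectl context named after this profile:

  kubectl --context embed-certs-368536 describe node embed-certs-368536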
	
	
	==> dmesg <==
	[  +0.040146] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.780007] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Jul29 19:12] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.490513] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.347328] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +0.066076] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.075137] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +0.175373] systemd-fstab-generator[677]: Ignoring "noauto" option for root device
	[  +0.146386] systemd-fstab-generator[689]: Ignoring "noauto" option for root device
	[  +0.286031] systemd-fstab-generator[719]: Ignoring "noauto" option for root device
	[  +4.297208] systemd-fstab-generator[818]: Ignoring "noauto" option for root device
	[  +0.058855] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.365142] systemd-fstab-generator[943]: Ignoring "noauto" option for root device
	[  +5.597558] kauditd_printk_skb: 97 callbacks suppressed
	[  +7.612073] kauditd_printk_skb: 50 callbacks suppressed
	[  +6.026927] kauditd_printk_skb: 27 callbacks suppressed
	[Jul29 19:17] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.662898] systemd-fstab-generator[3575]: Ignoring "noauto" option for root device
	[  +4.754849] kauditd_printk_skb: 54 callbacks suppressed
	[  +1.332354] systemd-fstab-generator[3894]: Ignoring "noauto" option for root device
	[ +13.799696] systemd-fstab-generator[4109]: Ignoring "noauto" option for root device
	[  +0.101879] kauditd_printk_skb: 14 callbacks suppressed
	[Jul29 19:18] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [6dcda9b2e1dc9744276cda6fee20af425aa842a19ad8760203adac5ffa740008] <==
	{"level":"info","ts":"2024-07-29T19:17:11.942561Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.95:2380"}
	{"level":"info","ts":"2024-07-29T19:17:12.772817Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"94e27d43a39d2148 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-29T19:17:12.773001Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"94e27d43a39d2148 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-29T19:17:12.773077Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"94e27d43a39d2148 received MsgPreVoteResp from 94e27d43a39d2148 at term 1"}
	{"level":"info","ts":"2024-07-29T19:17:12.773173Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"94e27d43a39d2148 became candidate at term 2"}
	{"level":"info","ts":"2024-07-29T19:17:12.773198Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"94e27d43a39d2148 received MsgVoteResp from 94e27d43a39d2148 at term 2"}
	{"level":"info","ts":"2024-07-29T19:17:12.773278Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"94e27d43a39d2148 became leader at term 2"}
	{"level":"info","ts":"2024-07-29T19:17:12.773308Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 94e27d43a39d2148 elected leader 94e27d43a39d2148 at term 2"}
	{"level":"info","ts":"2024-07-29T19:17:12.775169Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T19:17:12.776732Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"94e27d43a39d2148","local-member-attributes":"{Name:embed-certs-368536 ClientURLs:[https://192.168.50.95:2379]}","request-path":"/0/members/94e27d43a39d2148/attributes","cluster-id":"78c5ccfc677e9ba5","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T19:17:12.777273Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T19:17:12.777295Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T19:17:12.777771Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T19:17:12.777818Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T19:17:12.777925Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"78c5ccfc677e9ba5","local-member-id":"94e27d43a39d2148","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T19:17:12.778019Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T19:17:12.778065Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T19:17:12.779938Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.95:2379"}
	{"level":"info","ts":"2024-07-29T19:17:12.782415Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T19:27:12.818666Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":684}
	{"level":"info","ts":"2024-07-29T19:27:12.827489Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":684,"took":"8.43752ms","hash":67977032,"current-db-size-bytes":2297856,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":2297856,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2024-07-29T19:27:12.827559Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":67977032,"revision":684,"compact-revision":-1}
	{"level":"info","ts":"2024-07-29T19:32:12.827055Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":927}
	{"level":"info","ts":"2024-07-29T19:32:12.831437Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":927,"took":"3.971014ms","hash":1879516336,"current-db-size-bytes":2297856,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":1617920,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-07-29T19:32:12.831498Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1879516336,"revision":927,"compact-revision":684}
	
	
	==> kernel <==
	 19:32:50 up 20 min,  0 users,  load average: 0.38, 0.24, 0.14
	Linux embed-certs-368536 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1841eea64962819c42c74aa05055e43f5f2eb90205b000ed03460683caf85065] <==
	I0729 19:27:15.138176       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 19:28:15.136999       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 19:28:15.137070       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 19:28:15.137082       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 19:28:15.139441       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 19:28:15.139530       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 19:28:15.139537       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 19:30:15.137375       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 19:30:15.137556       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 19:30:15.137576       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 19:30:15.140710       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 19:30:15.140912       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 19:30:15.140959       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 19:32:14.141723       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 19:32:14.141924       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0729 19:32:15.143050       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 19:32:15.143139       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 19:32:15.143147       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 19:32:15.143221       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 19:32:15.143284       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 19:32:15.144293       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
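
The recurring 503s above come from the aggregation layer trying to reach the metrics-server backing v1beta1.metrics.k8s.io, which never becomes ready in this run (see the kubelet ImagePullBackOff entries further down). A minimal check of the aggregated API's condition, assuming a kubectl context named after this profile:

  kubectl --context embed-certs-368536 get apiservice v1beta1.metrics.k8s.io
  # expect AVAILABLE=False while the backing metrics-server pod is not running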
	
	
	==> kube-apiserver [f60b3d1fca4830e462e9f2f4362caaabfc8186d78ecddbe6108ffba50a30a9a5] <==
	W0729 19:17:06.718210       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:17:06.806619       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:17:06.878550       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:17:06.902632       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:17:06.915684       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:17:06.941071       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:17:06.948599       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:17:07.016201       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:17:07.083579       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:17:07.218685       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:17:07.226395       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:17:07.267219       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:17:07.281542       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:17:07.380464       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:17:07.398657       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:17:07.419317       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:17:07.462260       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:17:07.534002       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:17:07.701507       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:17:07.739537       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:17:07.741960       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:17:07.945434       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:17:08.071350       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:17:08.074748       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:17:08.161146       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
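
These connection-refused lines belong to the earlier apiserver instance (the CONTAINER_EXITED entry in the container listing above). They were emitted at 19:17:06-08, while etcd was being restarted just before the current control-plane containers came up at 19:17:11, so they do not indicate an ongoing problem. To pull that exited container's full log straight from the runtime, a sketch assuming crictl in the guest:

  sudo crictl logs f60b3d1fca4830e462e9f2f4362caaabfc8186d78ecddbe6108ffba50a30a9a5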
	
	
	==> kube-controller-manager [5d868148f3454fd7df7aeaf1387b63fd517d97e1cc6f64b01c1d3e9a82c0e876] <==
	I0729 19:27:00.690403       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:27:30.211216       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 19:27:30.698786       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:28:00.216991       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 19:28:00.706550       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0729 19:28:18.553721       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="224.954µs"
	I0729 19:28:29.557960       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="172.706µs"
	E0729 19:28:30.224005       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 19:28:30.714251       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:29:00.228666       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 19:29:00.723508       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:29:30.233700       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 19:29:30.731257       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:30:00.242500       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 19:30:00.739647       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:30:30.248046       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 19:30:30.749792       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:31:00.253255       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 19:31:00.759989       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:31:30.259989       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 19:31:30.767436       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:32:00.264472       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 19:32:00.775068       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:32:30.269942       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 19:32:30.784273       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [b8bd8c28c3f1568d0a5b315626538f4a9d96c454a2a0bfd77cdc1311e922494f] <==
	I0729 19:17:31.724194       1 server_linux.go:69] "Using iptables proxy"
	I0729 19:17:31.747326       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.95"]
	I0729 19:17:31.884657       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 19:17:31.884715       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 19:17:31.884731       1 server_linux.go:165] "Using iptables Proxier"
	I0729 19:17:31.888261       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 19:17:31.888587       1 server.go:872] "Version info" version="v1.30.3"
	I0729 19:17:31.888618       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 19:17:31.890053       1 config.go:192] "Starting service config controller"
	I0729 19:17:31.890126       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 19:17:31.890162       1 config.go:101] "Starting endpoint slice config controller"
	I0729 19:17:31.890165       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 19:17:31.891059       1 config.go:319] "Starting node config controller"
	I0729 19:17:31.891088       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 19:17:31.990853       1 shared_informer.go:320] Caches are synced for service config
	I0729 19:17:31.990903       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 19:17:31.991229       1 shared_informer.go:320] Caches are synced for node config
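
kube-proxy found no IPv6 iptables support and fell back to single-stack IPv4, which is consistent with the kubelet's failing ip6tables canary further down; the IPv4 proxier itself synced normally. A quick way to confirm the NAT rules it programmed, run inside the VM (a sketch, assuming shell access to the guest):

  sudo iptables -t nat -S | grep KUBE- | head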
	
	
	==> kube-scheduler [600489eb286eac55c593cb64fe454ef94937857bc7be1de102a6f56a76604b2b] <==
	W0729 19:17:14.153457       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 19:17:14.153531       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 19:17:14.153734       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 19:17:14.153793       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 19:17:14.153828       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 19:17:14.153846       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 19:17:14.153853       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 19:17:14.153806       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 19:17:15.080411       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 19:17:15.080462       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 19:17:15.113920       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 19:17:15.113994       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 19:17:15.131801       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 19:17:15.131908       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0729 19:17:15.154984       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 19:17:15.155031       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0729 19:17:15.193952       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 19:17:15.193995       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 19:17:15.251209       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 19:17:15.251256       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 19:17:15.265404       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 19:17:15.265431       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 19:17:15.369207       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 19:17:15.369340       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0729 19:17:18.639851       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
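
The forbidden errors above are confined to the first seconds after startup (19:17:14-15), before the apiserver had finished reconciling the scheduler's bootstrap RBAC; they stop once the caches sync at 19:17:18. A minimal after-the-fact permission check, assuming a kubectl context named after this profile:

  kubectl --context embed-certs-368536 auth can-i list pods --as=system:kube-scheduler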
	
	
	==> kubelet <==
	Jul 29 19:30:16 embed-certs-368536 kubelet[3901]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 19:30:16 embed-certs-368536 kubelet[3901]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 19:30:16 embed-certs-368536 kubelet[3901]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 19:30:19 embed-certs-368536 kubelet[3901]: E0729 19:30:19.538075    3901 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-9z4tp" podUID="1382116a-be81-46cb-92b8-ae335164f846"
	Jul 29 19:30:31 embed-certs-368536 kubelet[3901]: E0729 19:30:31.537032    3901 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-9z4tp" podUID="1382116a-be81-46cb-92b8-ae335164f846"
	Jul 29 19:30:45 embed-certs-368536 kubelet[3901]: E0729 19:30:45.536690    3901 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-9z4tp" podUID="1382116a-be81-46cb-92b8-ae335164f846"
	Jul 29 19:30:58 embed-certs-368536 kubelet[3901]: E0729 19:30:58.537553    3901 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-9z4tp" podUID="1382116a-be81-46cb-92b8-ae335164f846"
	Jul 29 19:31:13 embed-certs-368536 kubelet[3901]: E0729 19:31:13.536042    3901 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-9z4tp" podUID="1382116a-be81-46cb-92b8-ae335164f846"
	Jul 29 19:31:16 embed-certs-368536 kubelet[3901]: E0729 19:31:16.551186    3901 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 19:31:16 embed-certs-368536 kubelet[3901]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 19:31:16 embed-certs-368536 kubelet[3901]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 19:31:16 embed-certs-368536 kubelet[3901]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 19:31:16 embed-certs-368536 kubelet[3901]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 19:31:25 embed-certs-368536 kubelet[3901]: E0729 19:31:25.536964    3901 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-9z4tp" podUID="1382116a-be81-46cb-92b8-ae335164f846"
	Jul 29 19:31:40 embed-certs-368536 kubelet[3901]: E0729 19:31:40.537279    3901 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-9z4tp" podUID="1382116a-be81-46cb-92b8-ae335164f846"
	Jul 29 19:31:54 embed-certs-368536 kubelet[3901]: E0729 19:31:54.537061    3901 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-9z4tp" podUID="1382116a-be81-46cb-92b8-ae335164f846"
	Jul 29 19:32:06 embed-certs-368536 kubelet[3901]: E0729 19:32:06.539490    3901 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-9z4tp" podUID="1382116a-be81-46cb-92b8-ae335164f846"
	Jul 29 19:32:16 embed-certs-368536 kubelet[3901]: E0729 19:32:16.549755    3901 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 19:32:16 embed-certs-368536 kubelet[3901]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 19:32:16 embed-certs-368536 kubelet[3901]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 19:32:16 embed-certs-368536 kubelet[3901]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 19:32:16 embed-certs-368536 kubelet[3901]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 19:32:21 embed-certs-368536 kubelet[3901]: E0729 19:32:21.536561    3901 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-9z4tp" podUID="1382116a-be81-46cb-92b8-ae335164f846"
	Jul 29 19:32:36 embed-certs-368536 kubelet[3901]: E0729 19:32:36.536216    3901 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-9z4tp" podUID="1382116a-be81-46cb-92b8-ae335164f846"
	Jul 29 19:32:49 embed-certs-368536 kubelet[3901]: E0729 19:32:49.536559    3901 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-9z4tp" podUID="1382116a-be81-46cb-92b8-ae335164f846"
	
	
	==> storage-provisioner [52d51ed1bbac5c274a39024d0bad3a9452d6441b7ecdcb63f8092a7e50ed3210] <==
	I0729 19:17:31.770076       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 19:17:31.791523       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 19:17:31.791930       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 19:17:31.810673       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 19:17:31.811833       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-368536_220d82ce-99df-41e3-9f88-758388c9244a!
	I0729 19:17:31.811615       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"acd7b0d5-1a16-4d6d-8e6a-624c5d75b549", APIVersion:"v1", ResourceVersion:"398", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-368536_220d82ce-99df-41e3-9f88-758388c9244a became leader
	I0729 19:17:31.912831       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-368536_220d82ce-99df-41e3-9f88-758388c9244a!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-368536 -n embed-certs-368536
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-368536 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-9z4tp
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-368536 describe pod metrics-server-569cc877fc-9z4tp
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-368536 describe pod metrics-server-569cc877fc-9z4tp: exit status 1 (58.495051ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-9z4tp" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-368536 describe pod metrics-server-569cc877fc-9z4tp: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (374.61s)
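The kubelet log above pins down why AddonExistsAfterStop timed out: the metrics-server pod is stuck in ImagePullBackOff because its image points at fake.domain, a registry that does not exist, so the pull can never succeed; the recurring ip6tables "canary" messages are a separate, benign warning about the guest kernel lacking an ip6tables nat table. A minimal diagnosis sketch, assuming kubectl is pointed at the embed-certs-368536 context (the k8s-app=metrics-server label selector is an assumption, not taken from the test code):

	# Confirm which image the addon deployment is actually trying to pull.
	kubectl --context embed-certs-368536 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'   # expect fake.domain/registry.k8s.io/echoserver:1.4
	kubectl --context embed-certs-368536 -n kube-system get pods -l k8s-app=metrics-server -o wide
	# The ip6tables canary warning just means the ip6table_nat module is not loaded in the guest.
	lsmod | grep ip6table_nat || echo "ip6table_nat not loaded"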

                                                
                                    

Test pass (249/320)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 9.52
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.30.3/json-events 4.48
13 TestDownloadOnly/v1.30.3/preload-exists 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.06
18 TestDownloadOnly/v1.30.3/DeleteAll 0.13
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.12
21 TestDownloadOnly/v1.31.0-beta.0/json-events 5.25
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.06
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.13
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.12
30 TestBinaryMirror 0.56
31 TestOffline 101.57
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
36 TestAddons/Setup 139.03
40 TestAddons/serial/GCPAuth/Namespaces 2.9
42 TestAddons/parallel/Registry 15.57
44 TestAddons/parallel/InspektorGadget 10.76
46 TestAddons/parallel/HelmTiller 11.49
48 TestAddons/parallel/CSI 63.52
49 TestAddons/parallel/Headlamp 13.81
50 TestAddons/parallel/CloudSpanner 6.72
51 TestAddons/parallel/LocalPath 9.12
52 TestAddons/parallel/NvidiaDevicePlugin 5.55
53 TestAddons/parallel/Yakd 10.93
55 TestCertOptions 41.64
58 TestForceSystemdFlag 59.38
59 TestForceSystemdEnv 69.38
61 TestKVMDriverInstallOrUpdate 1.22
65 TestErrorSpam/setup 39.24
66 TestErrorSpam/start 0.34
67 TestErrorSpam/status 0.7
68 TestErrorSpam/pause 1.54
69 TestErrorSpam/unpause 1.52
70 TestErrorSpam/stop 4.98
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 92.31
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 38.08
77 TestFunctional/serial/KubeContext 0.04
78 TestFunctional/serial/KubectlGetPods 0.07
81 TestFunctional/serial/CacheCmd/cache/add_remote 3.22
82 TestFunctional/serial/CacheCmd/cache/add_local 0.99
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
84 TestFunctional/serial/CacheCmd/cache/list 0.04
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
86 TestFunctional/serial/CacheCmd/cache/cache_reload 1.58
87 TestFunctional/serial/CacheCmd/cache/delete 0.09
88 TestFunctional/serial/MinikubeKubectlCmd 0.1
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.09
90 TestFunctional/serial/ExtraConfig 38.32
91 TestFunctional/serial/ComponentHealth 0.07
92 TestFunctional/serial/LogsCmd 1.42
93 TestFunctional/serial/LogsFileCmd 1.45
94 TestFunctional/serial/InvalidService 4.92
96 TestFunctional/parallel/ConfigCmd 0.34
97 TestFunctional/parallel/DashboardCmd 12.19
98 TestFunctional/parallel/DryRun 0.29
99 TestFunctional/parallel/InternationalLanguage 0.16
100 TestFunctional/parallel/StatusCmd 1.23
104 TestFunctional/parallel/ServiceCmdConnect 10.56
105 TestFunctional/parallel/AddonsCmd 0.13
106 TestFunctional/parallel/PersistentVolumeClaim 41.8
108 TestFunctional/parallel/SSHCmd 0.44
109 TestFunctional/parallel/CpCmd 1.39
110 TestFunctional/parallel/MySQL 27.57
111 TestFunctional/parallel/FileSync 0.41
112 TestFunctional/parallel/CertSync 1.64
116 TestFunctional/parallel/NodeLabels 0.06
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.45
120 TestFunctional/parallel/License 0.18
121 TestFunctional/parallel/ServiceCmd/DeployApp 10.21
122 TestFunctional/parallel/Version/short 0.05
123 TestFunctional/parallel/Version/components 0.93
124 TestFunctional/parallel/ImageCommands/ImageListShort 2.24
125 TestFunctional/parallel/ImageCommands/ImageListTable 0.33
126 TestFunctional/parallel/ImageCommands/ImageListJson 0.3
127 TestFunctional/parallel/ImageCommands/ImageListYaml 1.2
128 TestFunctional/parallel/ImageCommands/ImageBuild 8.62
129 TestFunctional/parallel/ImageCommands/Setup 0.45
130 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.38
140 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.89
141 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.01
142 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.51
143 TestFunctional/parallel/ImageCommands/ImageRemove 0.43
144 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.94
145 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.86
146 TestFunctional/parallel/ProfileCmd/profile_not_create 0.28
147 TestFunctional/parallel/ProfileCmd/profile_list 0.26
148 TestFunctional/parallel/ProfileCmd/profile_json_output 0.28
149 TestFunctional/parallel/MountCmd/any-port 13.7
150 TestFunctional/parallel/ServiceCmd/List 0.29
151 TestFunctional/parallel/ServiceCmd/JSONOutput 0.36
152 TestFunctional/parallel/ServiceCmd/HTTPS 0.4
153 TestFunctional/parallel/ServiceCmd/Format 0.41
154 TestFunctional/parallel/ServiceCmd/URL 0.4
155 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
156 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
157 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
158 TestFunctional/parallel/MountCmd/specific-port 2.03
159 TestFunctional/parallel/MountCmd/VerifyCleanup 1.36
160 TestFunctional/delete_echo-server_images 0.04
161 TestFunctional/delete_my-image_image 0.02
162 TestFunctional/delete_minikube_cached_images 0.01
166 TestMultiControlPlane/serial/StartCluster 206.02
167 TestMultiControlPlane/serial/DeployApp 5.33
168 TestMultiControlPlane/serial/PingHostFromPods 1.17
169 TestMultiControlPlane/serial/AddWorkerNode 54.75
170 TestMultiControlPlane/serial/NodeLabels 0.07
171 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.53
172 TestMultiControlPlane/serial/CopyFile 12.49
174 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.45
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.37
178 TestMultiControlPlane/serial/DeleteSecondaryNode 17.16
179 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.37
181 TestMultiControlPlane/serial/RestartCluster 323.16
182 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.37
183 TestMultiControlPlane/serial/AddSecondaryNode 76.84
184 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.54
188 TestJSONOutput/start/Command 96
189 TestJSONOutput/start/Audit 0
191 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/pause/Command 0.71
195 TestJSONOutput/pause/Audit 0
197 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/unpause/Command 0.61
201 TestJSONOutput/unpause/Audit 0
203 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/stop/Command 7.4
207 TestJSONOutput/stop/Audit 0
209 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
210 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
211 TestErrorJSONOutput 0.19
216 TestMainNoArgs 0.04
217 TestMinikubeProfile 85.58
220 TestMountStart/serial/StartWithMountFirst 25.46
221 TestMountStart/serial/VerifyMountFirst 0.37
222 TestMountStart/serial/StartWithMountSecond 25.26
223 TestMountStart/serial/VerifyMountSecond 0.36
224 TestMountStart/serial/DeleteFirst 0.55
225 TestMountStart/serial/VerifyMountPostDelete 0.37
226 TestMountStart/serial/Stop 1.27
227 TestMountStart/serial/RestartStopped 22.15
228 TestMountStart/serial/VerifyMountPostStop 0.36
231 TestMultiNode/serial/FreshStart2Nodes 120.47
232 TestMultiNode/serial/DeployApp2Nodes 4.56
233 TestMultiNode/serial/PingHostFrom2Pods 0.77
234 TestMultiNode/serial/AddNode 48.92
235 TestMultiNode/serial/MultiNodeLabels 0.06
236 TestMultiNode/serial/ProfileList 0.21
237 TestMultiNode/serial/CopyFile 6.96
238 TestMultiNode/serial/StopNode 2.14
239 TestMultiNode/serial/StartAfterStop 36.74
241 TestMultiNode/serial/DeleteNode 2.29
243 TestMultiNode/serial/RestartMultiNode 184.72
244 TestMultiNode/serial/ValidateNameConflict 44.85
251 TestScheduledStopUnix 110.83
255 TestRunningBinaryUpgrade 152.47
261 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
264 TestNoKubernetes/serial/StartWithK8s 92.84
269 TestNetworkPlugins/group/false 2.82
273 TestStoppedBinaryUpgrade/Setup 0.42
274 TestStoppedBinaryUpgrade/Upgrade 149.92
275 TestNoKubernetes/serial/StartWithStopK8s 37.43
276 TestNoKubernetes/serial/Start 47.17
277 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
278 TestNoKubernetes/serial/ProfileList 6.66
279 TestNoKubernetes/serial/Stop 1.3
280 TestNoKubernetes/serial/StartNoArgs 31.36
281 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.22
290 TestPause/serial/Start 61.03
291 TestStoppedBinaryUpgrade/MinikubeLogs 0.86
293 TestNetworkPlugins/group/auto/Start 61.96
294 TestNetworkPlugins/group/kindnet/Start 93.69
295 TestNetworkPlugins/group/auto/KubeletFlags 0.2
296 TestNetworkPlugins/group/auto/NetCatPod 11.22
297 TestNetworkPlugins/group/auto/DNS 0.19
298 TestNetworkPlugins/group/auto/Localhost 0.16
299 TestNetworkPlugins/group/auto/HairPin 0.14
300 TestNetworkPlugins/group/custom-flannel/Start 85.18
301 TestNetworkPlugins/group/enable-default-cni/Start 80.36
302 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
303 TestNetworkPlugins/group/kindnet/KubeletFlags 0.25
304 TestNetworkPlugins/group/kindnet/NetCatPod 14.23
305 TestNetworkPlugins/group/kindnet/DNS 0.15
306 TestNetworkPlugins/group/kindnet/Localhost 0.14
307 TestNetworkPlugins/group/kindnet/HairPin 0.13
308 TestNetworkPlugins/group/flannel/Start 80.77
309 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.2
310 TestNetworkPlugins/group/custom-flannel/NetCatPod 18.24
311 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.21
312 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.24
313 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
314 TestNetworkPlugins/group/custom-flannel/DNS 0.17
315 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
316 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
317 TestNetworkPlugins/group/enable-default-cni/HairPin 0.19
318 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
319 TestNetworkPlugins/group/bridge/Start 99.05
320 TestNetworkPlugins/group/calico/Start 108.87
321 TestNetworkPlugins/group/flannel/ControllerPod 6.01
322 TestNetworkPlugins/group/flannel/KubeletFlags 0.43
323 TestNetworkPlugins/group/flannel/NetCatPod 12.2
324 TestNetworkPlugins/group/flannel/DNS 0.16
325 TestNetworkPlugins/group/flannel/Localhost 0.12
326 TestNetworkPlugins/group/flannel/HairPin 0.12
329 TestNetworkPlugins/group/bridge/KubeletFlags 0.21
330 TestNetworkPlugins/group/bridge/NetCatPod 10.23
331 TestNetworkPlugins/group/bridge/DNS 0.22
332 TestNetworkPlugins/group/bridge/Localhost 0.14
333 TestNetworkPlugins/group/bridge/HairPin 0.14
334 TestNetworkPlugins/group/calico/ControllerPod 6.01
335 TestNetworkPlugins/group/calico/KubeletFlags 0.21
336 TestNetworkPlugins/group/calico/NetCatPod 12.3
338 TestStartStop/group/no-preload/serial/FirstStart 72.65
339 TestNetworkPlugins/group/calico/DNS 0.19
340 TestNetworkPlugins/group/calico/Localhost 0.17
341 TestNetworkPlugins/group/calico/HairPin 0.15
343 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 100.67
344 TestStartStop/group/no-preload/serial/DeployApp 9.3
345 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.99
347 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.29
348 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.98
353 TestStartStop/group/no-preload/serial/SecondStart 654.73
355 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 568.93
356 TestStartStop/group/old-k8s-version/serial/Stop 1.28
357 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
360 TestStartStop/group/newest-cni/serial/FirstStart 49.63
361 TestStartStop/group/newest-cni/serial/DeployApp 0
362 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.07
363 TestStartStop/group/newest-cni/serial/Stop 2.37
364 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
365 TestStartStop/group/newest-cni/serial/SecondStart 35.3
366 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
367 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
368 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
369 TestStartStop/group/newest-cni/serial/Pause 2.47
371 TestStartStop/group/embed-certs/serial/FirstStart 98.34
372 TestStartStop/group/embed-certs/serial/DeployApp 9.28
374 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.98
378 TestStartStop/group/embed-certs/serial/SecondStart 625.33
x
+
TestDownloadOnly/v1.20.0/json-events (9.52s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-559567 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-559567 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (9.51957143s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (9.52s)
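Because the download-only start is invoked with -o=json, every progress step is emitted as one JSON object per line on stdout. A small sketch of watching those events interactively; jq is an assumption here (it is not used by the test) and simply pretty-prints each event:

	# Hypothetical local re-run of the same invocation, piped through jq.
	out/minikube-linux-amd64 start -o=json --download-only -p download-only-559567 --force \
	  --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2 | jq .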

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-559567
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-559567: exit status 85 (54.719886ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-559567 | jenkins | v1.33.1 | 29 Jul 24 17:33 UTC |          |
	|         | -p download-only-559567        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 17:33:10
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 17:33:10.103013   95294 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:33:10.103127   95294 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:33:10.103136   95294 out.go:304] Setting ErrFile to fd 2...
	I0729 17:33:10.103140   95294 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:33:10.103315   95294 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19339-88081/.minikube/bin
	W0729 17:33:10.103438   95294 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19339-88081/.minikube/config/config.json: open /home/jenkins/minikube-integration/19339-88081/.minikube/config/config.json: no such file or directory
	I0729 17:33:10.104072   95294 out.go:298] Setting JSON to true
	I0729 17:33:10.104949   95294 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":8110,"bootTime":1722266280,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 17:33:10.105008   95294 start.go:139] virtualization: kvm guest
	I0729 17:33:10.107261   95294 out.go:97] [download-only-559567] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0729 17:33:10.107362   95294 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball: no such file or directory
	I0729 17:33:10.107440   95294 notify.go:220] Checking for updates...
	I0729 17:33:10.108572   95294 out.go:169] MINIKUBE_LOCATION=19339
	I0729 17:33:10.109849   95294 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 17:33:10.110979   95294 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19339-88081/kubeconfig
	I0729 17:33:10.112117   95294 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19339-88081/.minikube
	I0729 17:33:10.113182   95294 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0729 17:33:10.115076   95294 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 17:33:10.115295   95294 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 17:33:10.150655   95294 out.go:97] Using the kvm2 driver based on user configuration
	I0729 17:33:10.150684   95294 start.go:297] selected driver: kvm2
	I0729 17:33:10.150692   95294 start.go:901] validating driver "kvm2" against <nil>
	I0729 17:33:10.151028   95294 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:33:10.151129   95294 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19339-88081/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 17:33:10.165955   95294 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 17:33:10.166012   95294 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 17:33:10.166468   95294 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0729 17:33:10.166618   95294 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 17:33:10.166650   95294 cni.go:84] Creating CNI manager for ""
	I0729 17:33:10.166658   95294 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 17:33:10.166669   95294 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 17:33:10.166729   95294 start.go:340] cluster config:
	{Name:download-only-559567 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-559567 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:33:10.166893   95294 iso.go:125] acquiring lock: {Name:mkff602c753bfa3e0d79d57ee8dc490f2e8d0298 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:33:10.168426   95294 out.go:97] Downloading VM boot image ...
	I0729 17:33:10.168458   95294 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19339-88081/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0729 17:33:14.377190   95294 out.go:97] Starting "download-only-559567" primary control-plane node in "download-only-559567" cluster
	I0729 17:33:14.377220   95294 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 17:33:14.399599   95294 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 17:33:14.399644   95294 cache.go:56] Caching tarball of preloaded images
	I0729 17:33:14.399796   95294 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 17:33:14.401284   95294 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0729 17:33:14.401305   95294 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0729 17:33:14.424179   95294 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-559567 host does not exist
	  To start a cluster, run: "minikube start -p download-only-559567"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
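The log above records exactly where this run cached the boot ISO and the v1.20.0 preload tarball, which is what the preload-exists check relies on. A quick sketch for confirming those artifacts on disk; the paths are the ones printed in the log (outside this CI job the prefix would normally be ~/.minikube):

	MINIKUBE_HOME=/home/jenkins/minikube-integration/19339-88081/.minikube
	ls -lh "$MINIKUBE_HOME/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso"
	ls -lh "$MINIKUBE_HOME/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4"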

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-559567
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/json-events (4.48s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-664821 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-664821 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (4.480680441s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (4.48s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-664821
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-664821: exit status 85 (56.135761ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-559567 | jenkins | v1.33.1 | 29 Jul 24 17:33 UTC |                     |
	|         | -p download-only-559567        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 29 Jul 24 17:33 UTC | 29 Jul 24 17:33 UTC |
	| delete  | -p download-only-559567        | download-only-559567 | jenkins | v1.33.1 | 29 Jul 24 17:33 UTC | 29 Jul 24 17:33 UTC |
	| start   | -o=json --download-only        | download-only-664821 | jenkins | v1.33.1 | 29 Jul 24 17:33 UTC |                     |
	|         | -p download-only-664821        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 17:33:19
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 17:33:19.926257   95511 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:33:19.926492   95511 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:33:19.926501   95511 out.go:304] Setting ErrFile to fd 2...
	I0729 17:33:19.926506   95511 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:33:19.926695   95511 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19339-88081/.minikube/bin
	I0729 17:33:19.927208   95511 out.go:298] Setting JSON to true
	I0729 17:33:19.927987   95511 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":8120,"bootTime":1722266280,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 17:33:19.928043   95511 start.go:139] virtualization: kvm guest
	I0729 17:33:19.930123   95511 out.go:97] [download-only-664821] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 17:33:19.930285   95511 notify.go:220] Checking for updates...
	I0729 17:33:19.931521   95511 out.go:169] MINIKUBE_LOCATION=19339
	I0729 17:33:19.932780   95511 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 17:33:19.934041   95511 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19339-88081/kubeconfig
	I0729 17:33:19.935100   95511 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19339-88081/.minikube
	I0729 17:33:19.936075   95511 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-664821 host does not exist
	  To start a cluster, run: "minikube start -p download-only-664821"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-664821
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/json-events (5.25s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-330185 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-330185 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (5.247775783s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (5.25s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-330185
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-330185: exit status 85 (58.446101ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-559567 | jenkins | v1.33.1 | 29 Jul 24 17:33 UTC |                     |
	|         | -p download-only-559567             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 29 Jul 24 17:33 UTC | 29 Jul 24 17:33 UTC |
	| delete  | -p download-only-559567             | download-only-559567 | jenkins | v1.33.1 | 29 Jul 24 17:33 UTC | 29 Jul 24 17:33 UTC |
	| start   | -o=json --download-only             | download-only-664821 | jenkins | v1.33.1 | 29 Jul 24 17:33 UTC |                     |
	|         | -p download-only-664821             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 29 Jul 24 17:33 UTC | 29 Jul 24 17:33 UTC |
	| delete  | -p download-only-664821             | download-only-664821 | jenkins | v1.33.1 | 29 Jul 24 17:33 UTC | 29 Jul 24 17:33 UTC |
	| start   | -o=json --download-only             | download-only-330185 | jenkins | v1.33.1 | 29 Jul 24 17:33 UTC |                     |
	|         | -p download-only-330185             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 17:33:24
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 17:33:24.714553   95702 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:33:24.714659   95702 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:33:24.714669   95702 out.go:304] Setting ErrFile to fd 2...
	I0729 17:33:24.714673   95702 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:33:24.714832   95702 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19339-88081/.minikube/bin
	I0729 17:33:24.715346   95702 out.go:298] Setting JSON to true
	I0729 17:33:24.716260   95702 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":8125,"bootTime":1722266280,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 17:33:24.716318   95702 start.go:139] virtualization: kvm guest
	I0729 17:33:24.718240   95702 out.go:97] [download-only-330185] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 17:33:24.718417   95702 notify.go:220] Checking for updates...
	I0729 17:33:24.719713   95702 out.go:169] MINIKUBE_LOCATION=19339
	I0729 17:33:24.721043   95702 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 17:33:24.722456   95702 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19339-88081/kubeconfig
	I0729 17:33:24.723642   95702 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19339-88081/.minikube
	I0729 17:33:24.724727   95702 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0729 17:33:24.726787   95702 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 17:33:24.727037   95702 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 17:33:24.757644   95702 out.go:97] Using the kvm2 driver based on user configuration
	I0729 17:33:24.757678   95702 start.go:297] selected driver: kvm2
	I0729 17:33:24.757684   95702 start.go:901] validating driver "kvm2" against <nil>
	I0729 17:33:24.758011   95702 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:33:24.758087   95702 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19339-88081/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 17:33:24.773274   95702 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 17:33:24.773316   95702 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 17:33:24.773812   95702 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0729 17:33:24.773981   95702 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 17:33:24.774009   95702 cni.go:84] Creating CNI manager for ""
	I0729 17:33:24.774019   95702 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 17:33:24.774036   95702 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 17:33:24.774097   95702 start.go:340] cluster config:
	{Name:download-only-330185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-330185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:33:24.774210   95702 iso.go:125] acquiring lock: {Name:mkff602c753bfa3e0d79d57ee8dc490f2e8d0298 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 17:33:24.775620   95702 out.go:97] Starting "download-only-330185" primary control-plane node in "download-only-330185" cluster
	I0729 17:33:24.775638   95702 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 17:33:24.802601   95702 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0729 17:33:24.802638   95702 cache.go:56] Caching tarball of preloaded images
	I0729 17:33:24.802798   95702 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 17:33:24.804768   95702 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0729 17:33:24.804788   95702 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 ...
	I0729 17:33:24.830237   95702 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:3743f5ddb63994a661f14e5a8d3af98c -> /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0729 17:33:27.405896   95702 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 ...
	I0729 17:33:27.405989   95702 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 ...
	I0729 17:33:28.124803   95702 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on crio
	I0729 17:33:28.125186   95702 profile.go:143] Saving config to /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/download-only-330185/config.json ...
	I0729 17:33:28.125216   95702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/download-only-330185/config.json: {Name:mkf7cb086747f6f121a2ccb73eb9acc4dbf032af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 17:33:28.125373   95702 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 17:33:28.125505   95702 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-beta.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19339-88081/.minikube/cache/linux/amd64/v1.31.0-beta.0/kubectl
	
	
	* The control-plane node download-only-330185 host does not exist
	  To start a cluster, run: "minikube start -p download-only-330185"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.06s)
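For the v1.31.0-beta.0 run the log also shows a kubectl binary being downloaded together with its published sha256. A hedged sketch of re-checking that checksum by hand; the URLs and cache path are the ones in the log, and sha256sum is standard coreutils:

	cd /home/jenkins/minikube-integration/19339-88081/.minikube/cache/linux/amd64/v1.31.0-beta.0
	curl -sLO https://dl.k8s.io/release/v1.31.0-beta.0/bin/linux/amd64/kubectl.sha256
	echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check   # prints "kubectl: OK" on a good download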

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-330185
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestBinaryMirror (0.56s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-423519 --alsologtostderr --binary-mirror http://127.0.0.1:45115 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-423519" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-423519
--- PASS: TestBinaryMirror (0.56s)
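TestBinaryMirror passes --binary-mirror http://127.0.0.1:45115, so Kubernetes binaries are fetched from a local HTTP endpoint instead of dl.k8s.io. A rough sketch of standing up such a mirror by hand, under stated assumptions: python3 is available, the profile name binary-mirror-demo is hypothetical, and the release-style directory layout is assumed rather than taken from the test:

	mkdir -p mirror && cd mirror                     # populate with release/<version>/bin/linux/amd64/* files
	python3 -m http.server 45115 &                   # serve the current directory over HTTP
	out/minikube-linux-amd64 start --download-only -p binary-mirror-demo \
	  --binary-mirror http://127.0.0.1:45115 --driver=kvm2 --container-runtime=crio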

                                                
                                    
x
+
TestOffline (101.57s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-778169 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-778169 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m40.373968078s)
helpers_test.go:175: Cleaning up "offline-crio-778169" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-778169
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-778169: (1.193247505s)
--- PASS: TestOffline (101.57s)
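TestOffline runs a full crio start with the command shown above. If one wants to see what minikube already holds locally before such a run, the cache directories that appear throughout this report are the place to look; a trivial sketch using the job's own paths:

	du -sh /home/jenkins/minikube-integration/19339-88081/.minikube/cache/iso \
	       /home/jenkins/minikube-integration/19339-88081/.minikube/cache/preloaded-tarball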

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-145541
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-145541: exit status 85 (50.698838ms)

                                                
                                                
-- stdout --
	* Profile "addons-145541" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-145541"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-145541
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-145541: exit status 85 (49.142406ms)

                                                
                                                
-- stdout --
	* Profile "addons-145541" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-145541"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)
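
Both PreSetup checks above assert that minikube exits with status 85 when an addon command targets a profile that has not been created yet. A minimal Go sketch of that kind of exit-code check, assuming minikube is on PATH and reusing the profile name from this log; this is illustrative, not the test's own helper:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Expect exit status 85 ("profile not found") when enabling an addon
	// on a profile that does not exist.
	cmd := exec.Command("minikube", "addons", "enable", "dashboard", "-p", "addons-145541")
	out, err := cmd.CombinedOutput()

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 85 {
		fmt.Printf("got expected exit status 85:\n%s", out)
		return
	}
	fmt.Printf("unexpected result (err=%v):\n%s", err, out)
}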

                                                
                                    
TestAddons/Setup (139.03s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-145541 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-145541 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m19.03201431s)
--- PASS: TestAddons/Setup (139.03s)
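
The Setup run above passes fourteen --addons flags in a single start invocation. As a small illustration (not the framework's code), the same argument list can be assembled programmatically; the addon names and profile are copied from the command in the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	addons := []string{
		"registry", "metrics-server", "volumesnapshots", "csi-hostpath-driver",
		"gcp-auth", "cloud-spanner", "inspektor-gadget", "storage-provisioner-rancher",
		"nvidia-device-plugin", "yakd", "volcano", "ingress", "ingress-dns", "helm-tiller",
	}

	args := []string{"start", "-p", "addons-145541", "--wait=true", "--memory=4000",
		"--driver=kvm2", "--container-runtime=crio"}
	for _, a := range addons {
		args = append(args, "--addons="+a)
	}

	// exec.Command passes each flag as its own argument; nothing is shell-quoted.
	cmd := exec.Command("minikube", args...)
	fmt.Println(cmd.String()) // print the assembled command instead of running it
}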

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (2.9s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-145541 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-145541 get secret gcp-auth -n new-namespace
addons_test.go:670: (dbg) Non-zero exit: kubectl --context addons-145541 get secret gcp-auth -n new-namespace: exit status 1 (68.637886ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): secrets "gcp-auth" not found

                                                
                                                
** /stderr **
addons_test.go:662: (dbg) Run:  kubectl --context addons-145541 logs -l app=gcp-auth -n gcp-auth
addons_test.go:670: (dbg) Run:  kubectl --context addons-145541 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (2.90s)

                                                
                                    
TestAddons/parallel/Registry (15.57s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 2.365448ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-698f998955-9qnhg" [ca8784f3-5a3c-4e49-b99f-0f6a32e7c737] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004597596s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-dgtch" [621f0921-7ec4-4046-b693-3dd1b6619b44] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005341976s
addons_test.go:342: (dbg) Run:  kubectl --context addons-145541 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-145541 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-145541 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.487716625s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-145541 ip
2024/07/29 17:36:23 [DEBUG] GET http://192.168.39.242:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-145541 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.57s)
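
The registry check above probes the in-cluster Service from a throwaway busybox pod. A hedged sketch of the same probe driven from Go, assuming kubectl can reach the addons-145541 context; -t is dropped because there is no TTY here:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// wget --spider checks reachability without downloading a body; a zero
	// exit status means registry.kube-system.svc.cluster.local answered
	// from inside the cluster.
	cmd := exec.Command("kubectl", "--context", "addons-145541",
		"run", "--rm", "registry-test", "--restart=Never",
		"--image=gcr.io/k8s-minikube/busybox", "-i", "--",
		"sh", "-c", "wget --spider -S http://registry.kube-system.svc.cluster.local")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Printf("registry not reachable: %v\n%s", err, out)
		return
	}
	fmt.Printf("registry reachable:\n%s", out)
}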

                                                
                                    
TestAddons/parallel/InspektorGadget (10.76s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-mbvnt" [a4e0590b-21fd-47a4-b966-e233f95ad067] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.005032858s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-145541
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-145541: (5.74913647s)
--- PASS: TestAddons/parallel/InspektorGadget (10.76s)
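
The addon checks in this group all wait for pods matching a label selector to become Ready before disabling the addon. The framework polls pod status itself; as an alternative sketch, kubectl's built-in wait expresses the same condition (label, namespace and timeout taken from the gadget check above):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Equivalent of "waiting 8m0s for pods matching k8s-app=gadget":
	// kubectl wait blocks until the condition holds or the timeout expires.
	timeout := 8 * time.Minute
	cmd := exec.Command("kubectl", "--context", "addons-145541",
		"wait", "--namespace=gadget",
		"--for=condition=Ready", "pod",
		"--selector=k8s-app=gadget",
		fmt.Sprintf("--timeout=%s", timeout))
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Printf("pods not ready within %s: %v\n%s", timeout, err, out)
		return
	}
	fmt.Printf("%s", out)
}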

                                                
                                    
TestAddons/parallel/HelmTiller (11.49s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 2.35556ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-d7vqp" [01075b35-8252-425f-8fc5-05b87bfaccdb] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.004517539s
addons_test.go:475: (dbg) Run:  kubectl --context addons-145541 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-145541 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.560998713s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-145541 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.49s)

                                                
                                    
TestAddons/parallel/CSI (63.52s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 6.285729ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-145541 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-145541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-145541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-145541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-145541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-145541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-145541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-145541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-145541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-145541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-145541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-145541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-145541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-145541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-145541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-145541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-145541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-145541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-145541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-145541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-145541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-145541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-145541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-145541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-145541 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-145541 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [b0c7e202-58d1-4857-a7f3-533b3bb4b820] Pending
helpers_test.go:344: "task-pv-pod" [b0c7e202-58d1-4857-a7f3-533b3bb4b820] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [b0c7e202-58d1-4857-a7f3-533b3bb4b820] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.004493538s
addons_test.go:590: (dbg) Run:  kubectl --context addons-145541 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-145541 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-145541 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-145541 delete pod task-pv-pod
addons_test.go:600: (dbg) Done: kubectl --context addons-145541 delete pod task-pv-pod: (1.202117543s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-145541 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-145541 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-145541 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-145541 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-145541 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-145541 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-145541 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-145541 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-145541 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-145541 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-145541 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [9eee6bc5-8d99-463e-916d-cdeeedc93002] Pending
helpers_test.go:344: "task-pv-pod-restore" [9eee6bc5-8d99-463e-916d-cdeeedc93002] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [9eee6bc5-8d99-463e-916d-cdeeedc93002] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004068065s
addons_test.go:632: (dbg) Run:  kubectl --context addons-145541 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-145541 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-145541 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-145541 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-145541 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.716304777s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-145541 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (63.52s)
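
The CSI test above repeatedly runs kubectl get pvc ... -o jsonpath={.status.phase} until the claim is bound. A minimal sketch of that polling loop, assuming the same context and claim name; the poll interval is illustrative, not the framework's value:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCBound polls the claim's .status.phase until it reports Bound.
func waitForPVCBound(context, namespace, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", context,
			"get", "pvc", name, "-n", namespace,
			"-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second) // illustrative poll interval
	}
	return fmt.Errorf("pvc %s/%s not Bound within %s", namespace, name, timeout)
}

func main() {
	if err := waitForPVCBound("addons-145541", "default", "hpvc", 6*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("pvc hpvc is Bound")
}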

                                                
                                    
TestAddons/parallel/Headlamp (13.81s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-145541 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-145541 --alsologtostderr -v=1: (1.309282135s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-nvghv" [011cd85f-b07b-46e4-b4bd-1f47b7dc24df] Pending
helpers_test.go:344: "headlamp-7867546754-nvghv" [011cd85f-b07b-46e4-b4bd-1f47b7dc24df] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-nvghv" [011cd85f-b07b-46e4-b4bd-1f47b7dc24df] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.004478278s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-145541 addons disable headlamp --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Headlamp (13.81s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.72s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5455fb9b69-7rq2s" [19e21138-2e14-4f35-b2db-97b4451eb2a8] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.006712676s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-145541
--- PASS: TestAddons/parallel/CloudSpanner (6.72s)

                                                
                                    
TestAddons/parallel/LocalPath (9.12s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-145541 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-145541 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-145541 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-145541 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-145541 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-145541 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-145541 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [9a850c5e-42d0-4511-80d7-ebd5a445cf68] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [9a850c5e-42d0-4511-80d7-ebd5a445cf68] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [9a850c5e-42d0-4511-80d7-ebd5a445cf68] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003832262s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-145541 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-145541 ssh "cat /opt/local-path-provisioner/pvc-1e5ae59b-219f-4d33-8e28-ea4906311031_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-145541 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-145541 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-145541 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (9.12s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.55s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-4gjrg" [3288c0c8-9742-44dc-985f-33455a462b79] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004669588s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-145541
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.55s)

                                                
                                    
TestAddons/parallel/Yakd (10.93s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-wxn9s" [a289d262-5c74-4f53-af65-48b2a47b3af6] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.005247549s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-145541 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-145541 addons disable yakd --alsologtostderr -v=1: (5.92520751s)
--- PASS: TestAddons/parallel/Yakd (10.93s)

                                                
                                    
TestCertOptions (41.64s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-899685 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-899685 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (40.345078391s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-899685 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-899685 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-899685 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-899685" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-899685
--- PASS: TestCertOptions (41.64s)
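
TestCertOptions inspects the generated apiserver certificate with openssl over minikube ssh. Assuming the certificate can be read the same way, the extra --apiserver-ips and --apiserver-names passed above can also be checked in Go with crypto/x509; the parsing below is illustrative, while the test itself works from openssl's text output:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os/exec"
)

func main() {
	// Read the apiserver certificate out of the VM as PEM text.
	out, err := exec.Command("minikube", "-p", "cert-options-899685", "ssh",
		"sudo cat /var/lib/minikube/certs/apiserver.crt").Output()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	block, _ := pem.Decode(out)
	if block == nil {
		fmt.Println("no PEM block found in apiserver.crt")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse failed:", err)
		return
	}
	// The extra --apiserver-ips and --apiserver-names from the start command
	// should show up here as subject alternative names.
	fmt.Println("IP SANs: ", cert.IPAddresses)
	fmt.Println("DNS SANs:", cert.DNSNames)
}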

                                                
                                    
TestForceSystemdFlag (59.38s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-729652 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-729652 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (58.307838932s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-729652 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-729652" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-729652
--- PASS: TestForceSystemdFlag (59.38s)
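
The force-systemd check above reads CRI-O's drop-in config over ssh. A short sketch of one way to verify the result, under the assumption that the file is expected to contain cgroup_manager = "systemd"; the exact assertion in docker_test.go may differ:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("minikube", "-p", "force-systemd-flag-729652", "ssh",
		"cat /etc/crio/crio.conf.d/02-crio.conf").Output()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	// --force-systemd should leave CRI-O configured with the systemd cgroup driver.
	if strings.Contains(string(out), `cgroup_manager = "systemd"`) {
		fmt.Println("crio is using the systemd cgroup manager")
	} else {
		fmt.Println("systemd cgroup manager not found in 02-crio.conf")
	}
}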

                                                
                                    
TestForceSystemdEnv (69.38s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-801126 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-801126 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m8.526624317s)
helpers_test.go:175: Cleaning up "force-systemd-env-801126" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-801126
--- PASS: TestForceSystemdEnv (69.38s)

                                                
                                    
TestKVMDriverInstallOrUpdate (1.22s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (1.22s)

                                                
                                    
TestErrorSpam/setup (39.24s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-992136 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-992136 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-992136 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-992136 --driver=kvm2  --container-runtime=crio: (39.235349965s)
--- PASS: TestErrorSpam/setup (39.24s)

                                                
                                    
TestErrorSpam/start (0.34s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-992136 --log_dir /tmp/nospam-992136 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-992136 --log_dir /tmp/nospam-992136 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-992136 --log_dir /tmp/nospam-992136 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

                                                
                                    
TestErrorSpam/status (0.7s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-992136 --log_dir /tmp/nospam-992136 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-992136 --log_dir /tmp/nospam-992136 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-992136 --log_dir /tmp/nospam-992136 status
--- PASS: TestErrorSpam/status (0.70s)

                                                
                                    
TestErrorSpam/pause (1.54s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-992136 --log_dir /tmp/nospam-992136 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-992136 --log_dir /tmp/nospam-992136 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-992136 --log_dir /tmp/nospam-992136 pause
--- PASS: TestErrorSpam/pause (1.54s)

                                                
                                    
TestErrorSpam/unpause (1.52s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-992136 --log_dir /tmp/nospam-992136 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-992136 --log_dir /tmp/nospam-992136 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-992136 --log_dir /tmp/nospam-992136 unpause
--- PASS: TestErrorSpam/unpause (1.52s)

                                                
                                    
TestErrorSpam/stop (4.98s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-992136 --log_dir /tmp/nospam-992136 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-992136 --log_dir /tmp/nospam-992136 stop: (1.491860994s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-992136 --log_dir /tmp/nospam-992136 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-992136 --log_dir /tmp/nospam-992136 stop: (1.963357458s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-992136 --log_dir /tmp/nospam-992136 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-992136 --log_dir /tmp/nospam-992136 stop: (1.52908671s)
--- PASS: TestErrorSpam/stop (4.98s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/19339-88081/.minikube/files/etc/test/nested/copy/95282/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (92.31s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-810151 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0729 17:45:53.333940   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/client.crt: no such file or directory
E0729 17:45:53.340310   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/client.crt: no such file or directory
E0729 17:45:53.350551   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/client.crt: no such file or directory
E0729 17:45:53.370823   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/client.crt: no such file or directory
E0729 17:45:53.411157   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/client.crt: no such file or directory
E0729 17:45:53.491469   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/client.crt: no such file or directory
E0729 17:45:53.651798   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/client.crt: no such file or directory
E0729 17:45:53.972395   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/client.crt: no such file or directory
E0729 17:45:54.613378   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/client.crt: no such file or directory
E0729 17:45:55.893882   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/client.crt: no such file or directory
E0729 17:45:58.455685   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/client.crt: no such file or directory
E0729 17:46:03.576304   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/client.crt: no such file or directory
E0729 17:46:13.817282   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/client.crt: no such file or directory
E0729 17:46:34.298104   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-810151 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m32.308041966s)
--- PASS: TestFunctional/serial/StartWithProxy (92.31s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (38.08s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-810151 --alsologtostderr -v=8
E0729 17:47:15.258825   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-810151 --alsologtostderr -v=8: (38.077157477s)
functional_test.go:659: soft start took 38.077853567s for "functional-810151" cluster.
--- PASS: TestFunctional/serial/SoftStart (38.08s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-810151 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-810151 cache add registry.k8s.io/pause:3.1: (1.044144122s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-810151 cache add registry.k8s.io/pause:3.3: (1.11828953s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-810151 cache add registry.k8s.io/pause:latest: (1.055193509s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.22s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (0.99s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-810151 /tmp/TestFunctionalserialCacheCmdcacheadd_local2549294055/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 cache add minikube-local-cache-test:functional-810151
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 cache delete minikube-local-cache-test:functional-810151
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-810151
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.99s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.58s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-810151 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (206.202858ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.58s)
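
cache_reload above is a remove / verify-missing / reload / verify-present cycle. A condensed Go sketch of that sequence, reusing the commands from the log; the run helper is illustrative, not the test's own code:

package main

import (
	"fmt"
	"os/exec"
)

// run executes minikube with the given args against the functional-810151 profile.
func run(args ...string) error {
	cmd := exec.Command("minikube", append([]string{"-p", "functional-810151"}, args...)...)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("%v: %s", err, out)
	}
	return nil
}

func main() {
	// 1. Remove the image from the node's runtime.
	_ = run("ssh", "sudo crictl rmi registry.k8s.io/pause:latest")

	// 2. inspecti should now fail: the image is gone from the node.
	if err := run("ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err == nil {
		fmt.Println("expected inspecti to fail before the reload")
		return
	}

	// 3. Reload the on-disk cache back into the runtime, then verify it is back.
	if err := run("cache", "reload"); err != nil {
		fmt.Println("cache reload failed:", err)
		return
	}
	if err := run("ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
		fmt.Println("image still missing after reload:", err)
		return
	}
	fmt.Println("pause:latest restored from the local cache")
}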

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 kubectl -- --context functional-810151 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-810151 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

                                                
                                    
TestFunctional/serial/ExtraConfig (38.32s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-810151 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-810151 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.323463183s)
functional_test.go:757: restart took 38.323602715s for "functional-810151" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (38.32s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-810151 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
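
ComponentHealth above pulls the control-plane pods as JSON and reports each component's phase and Ready condition. A minimal sketch of that JSON walk, assuming the same kubectl context; the struct models only the fields such a check needs:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// podList models just enough of `kubectl get po -o json` for a health check.
type podList struct {
	Items []struct {
		Metadata struct {
			Labels map[string]string `json:"labels"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-810151",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, p := range pods.Items {
		ready := "NotReady"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" && c.Status == "True" {
				ready = "Ready"
			}
		}
		fmt.Printf("%s: phase=%s status=%s\n", p.Metadata.Labels["component"], p.Status.Phase, ready)
	}
}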

                                                
                                    
TestFunctional/serial/LogsCmd (1.42s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-810151 logs: (1.41800233s)
--- PASS: TestFunctional/serial/LogsCmd (1.42s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.45s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 logs --file /tmp/TestFunctionalserialLogsFileCmd167349429/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-810151 logs --file /tmp/TestFunctionalserialLogsFileCmd167349429/001/logs.txt: (1.453587797s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.45s)

                                                
                                    
TestFunctional/serial/InvalidService (4.92s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-810151 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-810151
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-810151: exit status 115 (268.73269ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.176:30608 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-810151 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-810151 delete -f testdata/invalidsvc.yaml: (1.460583246s)
--- PASS: TestFunctional/serial/InvalidService (4.92s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.34s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-810151 config get cpus: exit status 14 (51.67995ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-810151 config get cpus: exit status 14 (45.245145ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.34s)
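
ConfigCmd above exercises the set / get / unset round trip and expects exit status 14 whenever the key is absent. A sketch of the same round trip, assuming minikube is on PATH and reusing the profile from the log; the config helper is illustrative:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// config runs `minikube -p functional-810151 config ...` and returns trimmed output plus exit code.
func config(args ...string) (string, int) {
	cmd := exec.Command("minikube", append([]string{"-p", "functional-810151", "config"}, args...)...)
	out, err := cmd.Output()
	code := 0
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		code = exitErr.ExitCode()
	}
	return strings.TrimSpace(string(out)), code
}

func main() {
	if _, code := config("get", "cpus"); code != 14 {
		fmt.Println("expected exit 14 for an unset key, got", code)
	}
	config("set", "cpus", "2")
	if val, _ := config("get", "cpus"); val != "2" {
		fmt.Println("expected cpus=2 after set, got", val)
	}
	config("unset", "cpus")
	if _, code := config("get", "cpus"); code != 14 {
		fmt.Println("expected exit 14 after unset, got", code)
	}
	fmt.Println("config round trip finished")
}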

                                                
                                    
TestFunctional/parallel/DashboardCmd (12.19s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-810151 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-810151 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 104674: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.19s)
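
DashboardCmd above starts minikube dashboard --url as a background process and later tears it down, which is why the cleanup helper logs a benign "process already finished". A hedged sketch of the same start/stop handling with os/exec; port and profile are the ones used above, and the sleep is illustrative:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Launch the dashboard proxy in the background, the way the test's daemon helper does.
	cmd := exec.Command("minikube", "dashboard", "--url", "--port", "36195",
		"-p", "functional-810151", "--alsologtostderr", "-v=1")
	if err := cmd.Start(); err != nil {
		fmt.Println("start failed:", err)
		return
	}
	fmt.Println("dashboard proxy running as pid", cmd.Process.Pid)

	// Give it a little time, then tear it down. If the process exited on its
	// own first, Kill returns "process already finished", the same benign
	// message the cleanup helper logs above.
	time.Sleep(10 * time.Second)
	if err := cmd.Process.Kill(); err != nil {
		fmt.Println("kill:", err)
	}
	_ = cmd.Wait()
}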

                                                
                                    
TestFunctional/parallel/DryRun (0.29s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-810151 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-810151 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (145.199795ms)

                                                
                                                
-- stdout --
	* [functional-810151] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19339
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19339-88081/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19339-88081/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 17:48:30.980300  104336 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:48:30.980551  104336 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:48:30.980562  104336 out.go:304] Setting ErrFile to fd 2...
	I0729 17:48:30.980580  104336 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:48:30.980777  104336 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19339-88081/.minikube/bin
	I0729 17:48:30.981451  104336 out.go:298] Setting JSON to false
	I0729 17:48:30.982444  104336 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":9031,"bootTime":1722266280,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 17:48:30.982505  104336 start.go:139] virtualization: kvm guest
	I0729 17:48:30.984413  104336 out.go:177] * [functional-810151] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 17:48:30.985556  104336 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 17:48:30.985571  104336 notify.go:220] Checking for updates...
	I0729 17:48:30.987754  104336 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 17:48:30.988934  104336 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19339-88081/kubeconfig
	I0729 17:48:30.989994  104336 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19339-88081/.minikube
	I0729 17:48:30.991240  104336 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 17:48:30.992487  104336 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 17:48:30.994357  104336 config.go:182] Loaded profile config "functional-810151": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:48:30.994925  104336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:48:30.994981  104336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:48:31.010643  104336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41661
	I0729 17:48:31.011102  104336 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:48:31.011689  104336 main.go:141] libmachine: Using API Version  1
	I0729 17:48:31.011714  104336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:48:31.012107  104336 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:48:31.012290  104336 main.go:141] libmachine: (functional-810151) Calling .DriverName
	I0729 17:48:31.012583  104336 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 17:48:31.013013  104336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:48:31.013059  104336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:48:31.028273  104336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45951
	I0729 17:48:31.028900  104336 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:48:31.029410  104336 main.go:141] libmachine: Using API Version  1
	I0729 17:48:31.029444  104336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:48:31.029842  104336 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:48:31.030048  104336 main.go:141] libmachine: (functional-810151) Calling .DriverName
	I0729 17:48:31.066587  104336 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 17:48:31.068003  104336 start.go:297] selected driver: kvm2
	I0729 17:48:31.068019  104336 start.go:901] validating driver "kvm2" against &{Name:functional-810151 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:functional-810151 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:48:31.068160  104336 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 17:48:31.070278  104336 out.go:177] 
	W0729 17:48:31.071505  104336 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0729 17:48:31.072669  104336 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-810151 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.29s)

TestFunctional/parallel/InternationalLanguage (0.16s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-810151 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-810151 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (156.956303ms)

-- stdout --
	* [functional-810151] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19339
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19339-88081/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19339-88081/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0729 17:48:30.831388  104287 out.go:291] Setting OutFile to fd 1 ...
	I0729 17:48:30.831531  104287 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:48:30.831544  104287 out.go:304] Setting ErrFile to fd 2...
	I0729 17:48:30.831551  104287 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 17:48:30.832022  104287 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19339-88081/.minikube/bin
	I0729 17:48:30.832742  104287 out.go:298] Setting JSON to false
	I0729 17:48:30.834181  104287 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":9031,"bootTime":1722266280,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 17:48:30.834269  104287 start.go:139] virtualization: kvm guest
	I0729 17:48:30.836694  104287 out.go:177] * [functional-810151] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0729 17:48:30.838183  104287 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 17:48:30.838274  104287 notify.go:220] Checking for updates...
	I0729 17:48:30.840721  104287 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 17:48:30.841982  104287 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19339-88081/kubeconfig
	I0729 17:48:30.843201  104287 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19339-88081/.minikube
	I0729 17:48:30.844498  104287 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 17:48:30.845795  104287 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 17:48:30.847501  104287 config.go:182] Loaded profile config "functional-810151": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 17:48:30.848091  104287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:48:30.848175  104287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:48:30.865484  104287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39895
	I0729 17:48:30.866008  104287 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:48:30.866623  104287 main.go:141] libmachine: Using API Version  1
	I0729 17:48:30.866648  104287 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:48:30.867040  104287 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:48:30.867252  104287 main.go:141] libmachine: (functional-810151) Calling .DriverName
	I0729 17:48:30.867552  104287 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 17:48:30.868009  104287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 17:48:30.868055  104287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 17:48:30.883150  104287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35165
	I0729 17:48:30.883679  104287 main.go:141] libmachine: () Calling .GetVersion
	I0729 17:48:30.884172  104287 main.go:141] libmachine: Using API Version  1
	I0729 17:48:30.884195  104287 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 17:48:30.884616  104287 main.go:141] libmachine: () Calling .GetMachineName
	I0729 17:48:30.884880  104287 main.go:141] libmachine: (functional-810151) Calling .DriverName
	I0729 17:48:30.918977  104287 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0729 17:48:30.920270  104287 start.go:297] selected driver: kvm2
	I0729 17:48:30.920291  104287 start.go:901] validating driver "kvm2" against &{Name:functional-810151 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:functional-810151 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 17:48:30.920448  104287 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 17:48:30.922892  104287 out.go:177] 
	W0729 17:48:30.924182  104287 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0729 17:48:30.925464  104287 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

TestFunctional/parallel/StatusCmd (1.23s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.23s)

TestFunctional/parallel/ServiceCmdConnect (10.56s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-810151 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-810151 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-7kmkv" [ef81782e-beb7-4cdf-a3e0-c47c36fde8a0] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-7kmkv" [ef81782e-beb7-4cdf-a3e0-c47c36fde8a0] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.004157022s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.176:32467
functional_test.go:1671: http://192.168.39.176:32467: success! body:

Hostname: hello-node-connect-57b4589c47-7kmkv

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.176:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.176:32467
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.56s)

TestFunctional/parallel/AddonsCmd (0.13s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

TestFunctional/parallel/PersistentVolumeClaim (41.8s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [7b0aecdc-474f-43ac-bc7e-a101af5f20b1] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.007692254s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-810151 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-810151 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-810151 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-810151 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-810151 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [464516fb-1c42-44cb-bb16-2519b05e834a] Pending
helpers_test.go:344: "sp-pod" [464516fb-1c42-44cb-bb16-2519b05e834a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [464516fb-1c42-44cb-bb16-2519b05e834a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.004204452s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-810151 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-810151 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-810151 delete -f testdata/storage-provisioner/pod.yaml: (2.798779628s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-810151 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [2b038f46-d4c8-4813-a113-959e453c0516] Pending
helpers_test.go:344: "sp-pod" [2b038f46-d4c8-4813-a113-959e453c0516] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [2b038f46-d4c8-4813-a113-959e453c0516] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 17.006832474s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-810151 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (41.80s)

TestFunctional/parallel/SSHCmd (0.44s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.44s)

TestFunctional/parallel/CpCmd (1.39s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 ssh -n functional-810151 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 cp functional-810151:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3231361826/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 ssh -n functional-810151 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 ssh -n functional-810151 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.39s)

TestFunctional/parallel/MySQL (27.57s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-810151 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-jqt9p" [0acc7e12-3bbf-421d-9091-16d2383f42e0] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-jqt9p" [0acc7e12-3bbf-421d-9091-16d2383f42e0] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 25.003501545s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-810151 exec mysql-64454c8b5c-jqt9p -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-810151 exec mysql-64454c8b5c-jqt9p -- mysql -ppassword -e "show databases;": exit status 1 (127.791895ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-810151 exec mysql-64454c8b5c-jqt9p -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-810151 exec mysql-64454c8b5c-jqt9p -- mysql -ppassword -e "show databases;": exit status 1 (123.844957ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-810151 exec mysql-64454c8b5c-jqt9p -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (27.57s)

TestFunctional/parallel/FileSync (0.41s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/95282/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 ssh "sudo cat /etc/test/nested/copy/95282/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.41s)

TestFunctional/parallel/CertSync (1.64s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/95282.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 ssh "sudo cat /etc/ssl/certs/95282.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/95282.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 ssh "sudo cat /usr/share/ca-certificates/95282.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/952822.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 ssh "sudo cat /etc/ssl/certs/952822.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/952822.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 ssh "sudo cat /usr/share/ca-certificates/952822.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.64s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-810151 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.45s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-810151 ssh "sudo systemctl is-active docker": exit status 1 (222.01368ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-810151 ssh "sudo systemctl is-active containerd": exit status 1 (224.921096ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.45s)

TestFunctional/parallel/License (0.18s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.18s)

TestFunctional/parallel/ServiceCmd/DeployApp (10.21s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-810151 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-810151 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-sxd6q" [e9f3755c-ea0b-4543-9741-5d2911ae56aa] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-sxd6q" [e9f3755c-ea0b-4543-9741-5d2911ae56aa] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.004210937s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.21s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.93s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.93s)

TestFunctional/parallel/ImageCommands/ImageListShort (2.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 image ls --format short --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-amd64 -p functional-810151 image ls --format short --alsologtostderr: (2.23498836s)
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-810151 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-810151
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/kindest/kindnetd:v20240715-585640e9
docker.io/kicbase/echo-server:functional-810151
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-810151 image ls --format short --alsologtostderr:
I0729 17:48:44.437713  105154 out.go:291] Setting OutFile to fd 1 ...
I0729 17:48:44.437865  105154 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 17:48:44.437874  105154 out.go:304] Setting ErrFile to fd 2...
I0729 17:48:44.437878  105154 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 17:48:44.438058  105154 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19339-88081/.minikube/bin
I0729 17:48:44.438705  105154 config.go:182] Loaded profile config "functional-810151": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 17:48:44.438860  105154 config.go:182] Loaded profile config "functional-810151": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 17:48:44.439431  105154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 17:48:44.439484  105154 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 17:48:44.460902  105154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45091
I0729 17:48:44.461413  105154 main.go:141] libmachine: () Calling .GetVersion
I0729 17:48:44.462064  105154 main.go:141] libmachine: Using API Version  1
I0729 17:48:44.462087  105154 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 17:48:44.462459  105154 main.go:141] libmachine: () Calling .GetMachineName
I0729 17:48:44.462703  105154 main.go:141] libmachine: (functional-810151) Calling .GetState
I0729 17:48:44.464835  105154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 17:48:44.464915  105154 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 17:48:44.484904  105154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44487
I0729 17:48:44.485361  105154 main.go:141] libmachine: () Calling .GetVersion
I0729 17:48:44.485975  105154 main.go:141] libmachine: Using API Version  1
I0729 17:48:44.485998  105154 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 17:48:44.486472  105154 main.go:141] libmachine: () Calling .GetMachineName
I0729 17:48:44.486708  105154 main.go:141] libmachine: (functional-810151) Calling .DriverName
I0729 17:48:44.486918  105154 ssh_runner.go:195] Run: systemctl --version
I0729 17:48:44.486953  105154 main.go:141] libmachine: (functional-810151) Calling .GetSSHHostname
I0729 17:48:44.490120  105154 main.go:141] libmachine: (functional-810151) DBG | domain functional-810151 has defined MAC address 52:54:00:e6:c8:f2 in network mk-functional-810151
I0729 17:48:44.490371  105154 main.go:141] libmachine: (functional-810151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:c8:f2", ip: ""} in network mk-functional-810151: {Iface:virbr1 ExpiryTime:2024-07-29 18:45:29 +0000 UTC Type:0 Mac:52:54:00:e6:c8:f2 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:functional-810151 Clientid:01:52:54:00:e6:c8:f2}
I0729 17:48:44.490404  105154 main.go:141] libmachine: (functional-810151) DBG | domain functional-810151 has defined IP address 192.168.39.176 and MAC address 52:54:00:e6:c8:f2 in network mk-functional-810151
I0729 17:48:44.490604  105154 main.go:141] libmachine: (functional-810151) Calling .GetSSHPort
I0729 17:48:44.490779  105154 main.go:141] libmachine: (functional-810151) Calling .GetSSHKeyPath
I0729 17:48:44.490913  105154 main.go:141] libmachine: (functional-810151) Calling .GetSSHUsername
I0729 17:48:44.491034  105154 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/functional-810151/id_rsa Username:docker}
I0729 17:48:44.612351  105154 ssh_runner.go:195] Run: sudo crictl images --output json
I0729 17:48:46.611931  105154 ssh_runner.go:235] Completed: sudo crictl images --output json: (1.999545237s)
I0729 17:48:46.612283  105154 main.go:141] libmachine: Making call to close driver server
I0729 17:48:46.612308  105154 main.go:141] libmachine: (functional-810151) Calling .Close
I0729 17:48:46.612598  105154 main.go:141] libmachine: Successfully made call to close driver server
I0729 17:48:46.612621  105154 main.go:141] libmachine: Making call to close connection to plugin binary
I0729 17:48:46.612631  105154 main.go:141] libmachine: Making call to close driver server
I0729 17:48:46.612640  105154 main.go:141] libmachine: (functional-810151) Calling .Close
I0729 17:48:46.612848  105154 main.go:141] libmachine: Successfully made call to close driver server
I0729 17:48:46.612868  105154 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (2.24s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-810151 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/kube-controller-manager | v1.30.3            | 76932a3b37d7e | 112MB  |
| registry.k8s.io/kube-scheduler          | v1.30.3            | 3edc18e7b7672 | 63.1MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| localhost/minikube-local-cache-test     | functional-810151  | 786eabb8e6ec0 | 3.33kB |
| docker.io/library/nginx                 | latest             | a72860cb95fd5 | 192MB  |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| docker.io/kicbase/echo-server           | functional-810151  | 9056ab77afb8e | 4.94MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-apiserver          | v1.30.3            | 1f6d574d502f3 | 118MB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/kindest/kindnetd              | v20240715-585640e9 | 5cc3abe5717db | 87.2MB |
| registry.k8s.io/kube-proxy              | v1.30.3            | 55bb025d2cfa5 | 86MB   |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-810151 image ls --format table --alsologtostderr:
I0729 17:48:46.968895  105378 out.go:291] Setting OutFile to fd 1 ...
I0729 17:48:46.969212  105378 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 17:48:46.969225  105378 out.go:304] Setting ErrFile to fd 2...
I0729 17:48:46.969232  105378 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 17:48:46.969523  105378 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19339-88081/.minikube/bin
I0729 17:48:46.970341  105378 config.go:182] Loaded profile config "functional-810151": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 17:48:46.970494  105378 config.go:182] Loaded profile config "functional-810151": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 17:48:46.971048  105378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 17:48:46.971097  105378 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 17:48:46.986639  105378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35431
I0729 17:48:46.987057  105378 main.go:141] libmachine: () Calling .GetVersion
I0729 17:48:46.987693  105378 main.go:141] libmachine: Using API Version  1
I0729 17:48:46.987724  105378 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 17:48:46.988073  105378 main.go:141] libmachine: () Calling .GetMachineName
I0729 17:48:46.988270  105378 main.go:141] libmachine: (functional-810151) Calling .GetState
I0729 17:48:46.990149  105378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 17:48:46.990199  105378 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 17:48:47.005764  105378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39679
I0729 17:48:47.006222  105378 main.go:141] libmachine: () Calling .GetVersion
I0729 17:48:47.006742  105378 main.go:141] libmachine: Using API Version  1
I0729 17:48:47.006771  105378 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 17:48:47.007060  105378 main.go:141] libmachine: () Calling .GetMachineName
I0729 17:48:47.007248  105378 main.go:141] libmachine: (functional-810151) Calling .DriverName
I0729 17:48:47.007448  105378 ssh_runner.go:195] Run: systemctl --version
I0729 17:48:47.007475  105378 main.go:141] libmachine: (functional-810151) Calling .GetSSHHostname
I0729 17:48:47.010356  105378 main.go:141] libmachine: (functional-810151) DBG | domain functional-810151 has defined MAC address 52:54:00:e6:c8:f2 in network mk-functional-810151
I0729 17:48:47.010718  105378 main.go:141] libmachine: (functional-810151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:c8:f2", ip: ""} in network mk-functional-810151: {Iface:virbr1 ExpiryTime:2024-07-29 18:45:29 +0000 UTC Type:0 Mac:52:54:00:e6:c8:f2 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:functional-810151 Clientid:01:52:54:00:e6:c8:f2}
I0729 17:48:47.010744  105378 main.go:141] libmachine: (functional-810151) DBG | domain functional-810151 has defined IP address 192.168.39.176 and MAC address 52:54:00:e6:c8:f2 in network mk-functional-810151
I0729 17:48:47.010895  105378 main.go:141] libmachine: (functional-810151) Calling .GetSSHPort
I0729 17:48:47.011056  105378 main.go:141] libmachine: (functional-810151) Calling .GetSSHKeyPath
I0729 17:48:47.011167  105378 main.go:141] libmachine: (functional-810151) Calling .GetSSHUsername
I0729 17:48:47.011290  105378 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/functional-810151/id_rsa Username:docker}
I0729 17:48:47.129052  105378 ssh_runner.go:195] Run: sudo crictl images --output json
I0729 17:48:47.242333  105378 main.go:141] libmachine: Making call to close driver server
I0729 17:48:47.242355  105378 main.go:141] libmachine: (functional-810151) Calling .Close
I0729 17:48:47.242685  105378 main.go:141] libmachine: (functional-810151) DBG | Closing plugin on server side
I0729 17:48:47.242720  105378 main.go:141] libmachine: Successfully made call to close driver server
I0729 17:48:47.242733  105378 main.go:141] libmachine: Making call to close connection to plugin binary
I0729 17:48:47.242745  105378 main.go:141] libmachine: Making call to close driver server
I0729 17:48:47.242764  105378 main.go:141] libmachine: (functional-810151) Calling .Close
I0729 17:48:47.242979  105378 main.go:141] libmachine: Successfully made call to close driver server
I0729 17:48:47.242995  105378 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.33s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-810151 image ls --format json --alsologtostderr:
[{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","repoDigests":["registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c","registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"117609954"},{"id":"76932a3b37d7eb138c8f47c9a2b4218f0466dd273ba
df856f2ce2f0277e15b5e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7","registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"112198984"},{"id":"5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f","repoDigests":["docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115","docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"],"repoTags":["docker.io/kindest/kindnetd:v20240715-585640e9"],"size":"87165492"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags"
:["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","repoDigests":["registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266","registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"63051080"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a","repoDigests":["docker.io/library/nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c","docker.io/library/nginx@sha256:baa881b012a49e3c2cd6ab9d80f9fcd
2962a98af8ede947d0ef930a427b28afc"],"repoTags":["docker.io/library/nginx:latest"],"size":"191750286"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","repoDigests":["registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43f
c8f6807ead2321bd1ec0e845a2f12dad80","registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"85953945"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:functional-810151"],"size":"4943877"},{"id":"786eabb8e6ec099b366d3974c6f24bf915a326a785d953aec8c7e7857cd6b88f","repoDigests":["localhost/minikube-local-cache-test@sha256:a65cfe583e4b9c4958fe3d44126ea31d9880c25bd6925f747c4679110a92fab6"],"repoTags":["localhost/minikube-local-cache-test:functional-810151"],"size":"3330"},{"id":"3861cfc
d7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-810151 image ls --format json --alsologtostderr:
I0729 17:48:46.671677  105316 out.go:291] Setting OutFile to fd 1 ...
I0729 17:48:46.671804  105316 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 17:48:46.671814  105316 out.go:304] Setting ErrFile to fd 2...
I0729 17:48:46.671819  105316 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 17:48:46.672016  105316 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19339-88081/.minikube/bin
I0729 17:48:46.672549  105316 config.go:182] Loaded profile config "functional-810151": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 17:48:46.672651  105316 config.go:182] Loaded profile config "functional-810151": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 17:48:46.673047  105316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 17:48:46.673089  105316 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 17:48:46.687497  105316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42153
I0729 17:48:46.687992  105316 main.go:141] libmachine: () Calling .GetVersion
I0729 17:48:46.688567  105316 main.go:141] libmachine: Using API Version  1
I0729 17:48:46.688618  105316 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 17:48:46.689054  105316 main.go:141] libmachine: () Calling .GetMachineName
I0729 17:48:46.689229  105316 main.go:141] libmachine: (functional-810151) Calling .GetState
I0729 17:48:46.691471  105316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 17:48:46.691514  105316 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 17:48:46.706396  105316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34909
I0729 17:48:46.706800  105316 main.go:141] libmachine: () Calling .GetVersion
I0729 17:48:46.707205  105316 main.go:141] libmachine: Using API Version  1
I0729 17:48:46.707226  105316 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 17:48:46.707559  105316 main.go:141] libmachine: () Calling .GetMachineName
I0729 17:48:46.707720  105316 main.go:141] libmachine: (functional-810151) Calling .DriverName
I0729 17:48:46.707883  105316 ssh_runner.go:195] Run: systemctl --version
I0729 17:48:46.707911  105316 main.go:141] libmachine: (functional-810151) Calling .GetSSHHostname
I0729 17:48:46.711010  105316 main.go:141] libmachine: (functional-810151) DBG | domain functional-810151 has defined MAC address 52:54:00:e6:c8:f2 in network mk-functional-810151
I0729 17:48:46.711445  105316 main.go:141] libmachine: (functional-810151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:c8:f2", ip: ""} in network mk-functional-810151: {Iface:virbr1 ExpiryTime:2024-07-29 18:45:29 +0000 UTC Type:0 Mac:52:54:00:e6:c8:f2 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:functional-810151 Clientid:01:52:54:00:e6:c8:f2}
I0729 17:48:46.711514  105316 main.go:141] libmachine: (functional-810151) DBG | domain functional-810151 has defined IP address 192.168.39.176 and MAC address 52:54:00:e6:c8:f2 in network mk-functional-810151
I0729 17:48:46.711600  105316 main.go:141] libmachine: (functional-810151) Calling .GetSSHPort
I0729 17:48:46.711763  105316 main.go:141] libmachine: (functional-810151) Calling .GetSSHKeyPath
I0729 17:48:46.711888  105316 main.go:141] libmachine: (functional-810151) Calling .GetSSHUsername
I0729 17:48:46.712005  105316 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/functional-810151/id_rsa Username:docker}
I0729 17:48:46.838971  105316 ssh_runner.go:195] Run: sudo crictl images --output json
I0729 17:48:46.908571  105316 main.go:141] libmachine: Making call to close driver server
I0729 17:48:46.908588  105316 main.go:141] libmachine: (functional-810151) Calling .Close
I0729 17:48:46.908955  105316 main.go:141] libmachine: (functional-810151) DBG | Closing plugin on server side
I0729 17:48:46.909084  105316 main.go:141] libmachine: Successfully made call to close driver server
I0729 17:48:46.909116  105316 main.go:141] libmachine: Making call to close connection to plugin binary
I0729 17:48:46.909130  105316 main.go:141] libmachine: Making call to close driver server
I0729 17:48:46.909137  105316 main.go:141] libmachine: (functional-810151) Calling .Close
I0729 17:48:46.909375  105316 main.go:141] libmachine: Successfully made call to close driver server
I0729 17:48:46.909393  105316 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.30s)
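For anyone scripting against output like the stdout above, here is a minimal Go sketch (not part of the test suite) that runs the same image ls --format json command and decodes it. The field names id, repoDigests, repoTags and size are taken from the listing above; the imageEntry struct name and the relative binary path are illustrative assumptions.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageEntry mirrors the fields visible in the stdout above; the name is ours.
type imageEntry struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // reported as a decimal byte count inside a string, e.g. "750414"
}

func main() {
	// Same invocation the test runs, without --alsologtostderr; assumes the
	// binary path and profile name used throughout this report.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-810151",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}

	var images []imageEntry
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Printf("%s  %v\n", img.ID, img.RepoTags)
	}
}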

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (1.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 image ls --format yaml --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-amd64 -p functional-810151 image ls --format yaml --alsologtostderr: (1.200208831s)
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-810151 image ls --format yaml --alsologtostderr:
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c
- registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "117609954"
- id: 55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1
repoDigests:
- registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80
- registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "85953945"
- id: 3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266
- registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "63051080"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:functional-810151
size: "4943877"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
- id: 76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7
- registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "112198984"
- id: 786eabb8e6ec099b366d3974c6f24bf915a326a785d953aec8c7e7857cd6b88f
repoDigests:
- localhost/minikube-local-cache-test@sha256:a65cfe583e4b9c4958fe3d44126ea31d9880c25bd6925f747c4679110a92fab6
repoTags:
- localhost/minikube-local-cache-test:functional-810151
size: "3330"
- id: 5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f
repoDigests:
- docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115
- docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493
repoTags:
- docker.io/kindest/kindnetd:v20240715-585640e9
size: "87165492"
- id: a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a
repoDigests:
- docker.io/library/nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c
- docker.io/library/nginx@sha256:baa881b012a49e3c2cd6ab9d80f9fcd2962a98af8ede947d0ef930a427b28afc
repoTags:
- docker.io/library/nginx:latest
size: "191750286"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-810151 image ls --format yaml --alsologtostderr:
I0729 17:48:45.464044  105292 out.go:291] Setting OutFile to fd 1 ...
I0729 17:48:45.464184  105292 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 17:48:45.464198  105292 out.go:304] Setting ErrFile to fd 2...
I0729 17:48:45.464204  105292 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 17:48:45.464522  105292 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19339-88081/.minikube/bin
I0729 17:48:45.465327  105292 config.go:182] Loaded profile config "functional-810151": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 17:48:45.465490  105292 config.go:182] Loaded profile config "functional-810151": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 17:48:45.466061  105292 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 17:48:45.466118  105292 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 17:48:45.481068  105292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37699
I0729 17:48:45.481502  105292 main.go:141] libmachine: () Calling .GetVersion
I0729 17:48:45.481976  105292 main.go:141] libmachine: Using API Version  1
I0729 17:48:45.482002  105292 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 17:48:45.482331  105292 main.go:141] libmachine: () Calling .GetMachineName
I0729 17:48:45.482573  105292 main.go:141] libmachine: (functional-810151) Calling .GetState
I0729 17:48:45.484429  105292 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 17:48:45.484477  105292 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 17:48:45.498786  105292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44579
I0729 17:48:45.499280  105292 main.go:141] libmachine: () Calling .GetVersion
I0729 17:48:45.499828  105292 main.go:141] libmachine: Using API Version  1
I0729 17:48:45.499851  105292 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 17:48:45.500188  105292 main.go:141] libmachine: () Calling .GetMachineName
I0729 17:48:45.500393  105292 main.go:141] libmachine: (functional-810151) Calling .DriverName
I0729 17:48:45.500611  105292 ssh_runner.go:195] Run: systemctl --version
I0729 17:48:45.500637  105292 main.go:141] libmachine: (functional-810151) Calling .GetSSHHostname
I0729 17:48:45.503283  105292 main.go:141] libmachine: (functional-810151) DBG | domain functional-810151 has defined MAC address 52:54:00:e6:c8:f2 in network mk-functional-810151
I0729 17:48:45.503682  105292 main.go:141] libmachine: (functional-810151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:c8:f2", ip: ""} in network mk-functional-810151: {Iface:virbr1 ExpiryTime:2024-07-29 18:45:29 +0000 UTC Type:0 Mac:52:54:00:e6:c8:f2 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:functional-810151 Clientid:01:52:54:00:e6:c8:f2}
I0729 17:48:45.503714  105292 main.go:141] libmachine: (functional-810151) DBG | domain functional-810151 has defined IP address 192.168.39.176 and MAC address 52:54:00:e6:c8:f2 in network mk-functional-810151
I0729 17:48:45.503846  105292 main.go:141] libmachine: (functional-810151) Calling .GetSSHPort
I0729 17:48:45.504032  105292 main.go:141] libmachine: (functional-810151) Calling .GetSSHKeyPath
I0729 17:48:45.504224  105292 main.go:141] libmachine: (functional-810151) Calling .GetSSHUsername
I0729 17:48:45.504345  105292 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/functional-810151/id_rsa Username:docker}
I0729 17:48:45.625312  105292 ssh_runner.go:195] Run: sudo crictl images --output json
I0729 17:48:46.613237  105292 main.go:141] libmachine: Making call to close driver server
I0729 17:48:46.613256  105292 main.go:141] libmachine: (functional-810151) Calling .Close
I0729 17:48:46.613524  105292 main.go:141] libmachine: Successfully made call to close driver server
I0729 17:48:46.613540  105292 main.go:141] libmachine: Making call to close connection to plugin binary
I0729 17:48:46.613553  105292 main.go:141] libmachine: Making call to close driver server
I0729 17:48:46.613560  105292 main.go:141] libmachine: (functional-810151) Calling .Close
I0729 17:48:46.613808  105292 main.go:141] libmachine: (functional-810151) DBG | Closing plugin on server side
I0729 17:48:46.613811  105292 main.go:141] libmachine: Successfully made call to close driver server
I0729 17:48:46.613837  105292 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (1.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (8.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-810151 ssh pgrep buildkitd: exit status 1 (265.418327ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 image build -t localhost/my-image:functional-810151 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-810151 image build -t localhost/my-image:functional-810151 testdata/build --alsologtostderr: (8.073375254s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-810151 image build -t localhost/my-image:functional-810151 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> abd50c0d114
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-810151
--> 17b3d32998d
Successfully tagged localhost/my-image:functional-810151
17b3d32998da9fac9083ccc52489dfca36dd99b88de12965f6701e3e389854e5
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-810151 image build -t localhost/my-image:functional-810151 testdata/build --alsologtostderr:
I0729 17:48:46.931123  105367 out.go:291] Setting OutFile to fd 1 ...
I0729 17:48:46.931239  105367 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 17:48:46.931250  105367 out.go:304] Setting ErrFile to fd 2...
I0729 17:48:46.931254  105367 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 17:48:46.931419  105367 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19339-88081/.minikube/bin
I0729 17:48:46.932703  105367 config.go:182] Loaded profile config "functional-810151": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 17:48:46.933786  105367 config.go:182] Loaded profile config "functional-810151": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 17:48:46.934326  105367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 17:48:46.934373  105367 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 17:48:46.950489  105367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34681
I0729 17:48:46.950999  105367 main.go:141] libmachine: () Calling .GetVersion
I0729 17:48:46.951540  105367 main.go:141] libmachine: Using API Version  1
I0729 17:48:46.951564  105367 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 17:48:46.951926  105367 main.go:141] libmachine: () Calling .GetMachineName
I0729 17:48:46.952120  105367 main.go:141] libmachine: (functional-810151) Calling .GetState
I0729 17:48:46.954024  105367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 17:48:46.954068  105367 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 17:48:46.970172  105367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34227
I0729 17:48:46.970583  105367 main.go:141] libmachine: () Calling .GetVersion
I0729 17:48:46.971093  105367 main.go:141] libmachine: Using API Version  1
I0729 17:48:46.971114  105367 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 17:48:46.971461  105367 main.go:141] libmachine: () Calling .GetMachineName
I0729 17:48:46.971678  105367 main.go:141] libmachine: (functional-810151) Calling .DriverName
I0729 17:48:46.971926  105367 ssh_runner.go:195] Run: systemctl --version
I0729 17:48:46.971955  105367 main.go:141] libmachine: (functional-810151) Calling .GetSSHHostname
I0729 17:48:46.975115  105367 main.go:141] libmachine: (functional-810151) DBG | domain functional-810151 has defined MAC address 52:54:00:e6:c8:f2 in network mk-functional-810151
I0729 17:48:46.975549  105367 main.go:141] libmachine: (functional-810151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:c8:f2", ip: ""} in network mk-functional-810151: {Iface:virbr1 ExpiryTime:2024-07-29 18:45:29 +0000 UTC Type:0 Mac:52:54:00:e6:c8:f2 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:functional-810151 Clientid:01:52:54:00:e6:c8:f2}
I0729 17:48:46.975570  105367 main.go:141] libmachine: (functional-810151) DBG | domain functional-810151 has defined IP address 192.168.39.176 and MAC address 52:54:00:e6:c8:f2 in network mk-functional-810151
I0729 17:48:46.975695  105367 main.go:141] libmachine: (functional-810151) Calling .GetSSHPort
I0729 17:48:46.975831  105367 main.go:141] libmachine: (functional-810151) Calling .GetSSHKeyPath
I0729 17:48:46.975930  105367 main.go:141] libmachine: (functional-810151) Calling .GetSSHUsername
I0729 17:48:46.976035  105367 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/functional-810151/id_rsa Username:docker}
I0729 17:48:47.091217  105367 build_images.go:161] Building image from path: /tmp/build.3632986371.tar
I0729 17:48:47.091302  105367 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0729 17:48:47.125754  105367 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3632986371.tar
I0729 17:48:47.138961  105367 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3632986371.tar: stat -c "%s %y" /var/lib/minikube/build/build.3632986371.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3632986371.tar': No such file or directory
I0729 17:48:47.138996  105367 ssh_runner.go:362] scp /tmp/build.3632986371.tar --> /var/lib/minikube/build/build.3632986371.tar (3072 bytes)
I0729 17:48:47.213156  105367 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3632986371
I0729 17:48:47.239275  105367 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3632986371 -xf /var/lib/minikube/build/build.3632986371.tar
I0729 17:48:47.260581  105367 crio.go:315] Building image: /var/lib/minikube/build/build.3632986371
I0729 17:48:47.260662  105367 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-810151 /var/lib/minikube/build/build.3632986371 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0729 17:48:54.906429  105367 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-810151 /var/lib/minikube/build/build.3632986371 --cgroup-manager=cgroupfs: (7.645733669s)
I0729 17:48:54.906526  105367 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3632986371
I0729 17:48:54.937265  105367 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3632986371.tar
I0729 17:48:54.950934  105367 build_images.go:217] Built localhost/my-image:functional-810151 from /tmp/build.3632986371.tar
I0729 17:48:54.950970  105367 build_images.go:133] succeeded building to: functional-810151
I0729 17:48:54.950978  105367 build_images.go:134] failed building to: 
I0729 17:48:54.951000  105367 main.go:141] libmachine: Making call to close driver server
I0729 17:48:54.951018  105367 main.go:141] libmachine: (functional-810151) Calling .Close
I0729 17:48:54.951319  105367 main.go:141] libmachine: Successfully made call to close driver server
I0729 17:48:54.951345  105367 main.go:141] libmachine: (functional-810151) DBG | Closing plugin on server side
I0729 17:48:54.951348  105367 main.go:141] libmachine: Making call to close connection to plugin binary
I0729 17:48:54.951387  105367 main.go:141] libmachine: Making call to close driver server
I0729 17:48:54.951399  105367 main.go:141] libmachine: (functional-810151) Calling .Close
I0729 17:48:54.951737  105367 main.go:141] libmachine: (functional-810151) DBG | Closing plugin on server side
I0729 17:48:54.951773  105367 main.go:141] libmachine: Successfully made call to close driver server
I0729 17:48:54.951785  105367 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (8.62s)
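The stderr above shows what image build does under the hood: the build context is tarred, copied to /var/lib/minikube/build on the node, and built there with sudo podman build --cgroup-manager=cgroupfs. Below is a minimal Go sketch of the same flow driven from the host, assuming the functional-810151 profile from this report is still running; the tag and context path simply repeat the test's arguments.

package main

import (
	"fmt"
	"os/exec"
)

// run invokes the minikube binary used throughout this report and panics on failure.
func run(args ...string) string {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%v: %s", err, out))
	}
	return string(out)
}

func main() {
	// Same command the test issues; minikube ships the context to the node and
	// runs podman build there (see the sudo podman build line in the stderr above).
	run("-p", "functional-810151", "image", "build",
		"-t", "localhost/my-image:functional-810151", "testdata/build")

	// Verify the new tag is now present in the CRI-O image store.
	fmt.Print(run("-p", "functional-810151", "image", "ls"))
}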

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-810151
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.45s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 image load --daemon docker.io/kicbase/echo-server:functional-810151 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-810151 image load --daemon docker.io/kicbase/echo-server:functional-810151 --alsologtostderr: (3.124658048s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.38s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 image load --daemon docker.io/kicbase/echo-server:functional-810151 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.89s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-810151
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 image load --daemon docker.io/kicbase/echo-server:functional-810151 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.01s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 image save docker.io/kicbase/echo-server:functional-810151 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 image rm docker.io/kicbase/echo-server:functional-810151 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.43s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.94s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-810151
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 image save --daemon docker.io/kicbase/echo-server:functional-810151 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-810151
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.86s)
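Taken together, the ImageSaveToFile, ImageRemove and ImageLoadFromFile tests above exercise a save/remove/load round trip. The following is a minimal Go sketch of that round trip, assuming the functional-810151 profile and the echo-server tag from this run; the /tmp tarball path is illustrative (the test writes into its Jenkins workspace instead).

package main

import (
	"os"
	"os/exec"
)

// mk runs a minikube subcommand against the profile used in this report.
func mk(args ...string) {
	cmd := exec.Command("out/minikube-linux-amd64",
		append([]string{"-p", "functional-810151"}, args...)...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}

func main() {
	img := "docker.io/kicbase/echo-server:functional-810151"
	tar := "/tmp/echo-server-save.tar" // illustrative path

	mk("image", "save", img, tar) // as in ImageSaveToFile
	mk("image", "rm", img)        // as in ImageRemove
	mk("image", "load", tar)      // as in ImageLoadFromFile
	mk("image", "ls")             // confirm the tag is present again
}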

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.28s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "212.400466ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "46.10393ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.26s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "230.771168ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "45.269316ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.28s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (13.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-810151 /tmp/TestFunctionalparallelMountCmdany-port1007492044/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1722275308331628031" to /tmp/TestFunctionalparallelMountCmdany-port1007492044/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1722275308331628031" to /tmp/TestFunctionalparallelMountCmdany-port1007492044/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1722275308331628031" to /tmp/TestFunctionalparallelMountCmdany-port1007492044/001/test-1722275308331628031
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-810151 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (203.277072ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 29 17:48 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 29 17:48 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 29 17:48 test-1722275308331628031
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 ssh cat /mount-9p/test-1722275308331628031
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-810151 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [c6e97ec4-baba-4c7c-b74c-90755ef831b2] Pending
helpers_test.go:344: "busybox-mount" [c6e97ec4-baba-4c7c-b74c-90755ef831b2] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [c6e97ec4-baba-4c7c-b74c-90755ef831b2] Running
E0729 17:48:37.179045   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/client.crt: no such file or directory
helpers_test.go:344: "busybox-mount" [c6e97ec4-baba-4c7c-b74c-90755ef831b2] Running / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [c6e97ec4-baba-4c7c-b74c-90755ef831b2] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 11.004562353s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-810151 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-810151 /tmp/TestFunctionalparallelMountCmdany-port1007492044/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (13.70s)
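The any-port test mounts a host temp directory into the guest over 9p, confirms it with findmnt over minikube ssh, lists its contents, and unmounts it during teardown. Here is a minimal Go sketch of the same verification, assuming a mount command such as the daemon line above is already running in a separate process.

package main

import (
	"fmt"
	"os/exec"
)

// ssh runs a shell command inside the guest, the same way the test does.
func ssh(cmd string) (string, error) {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-810151",
		"ssh", cmd).CombinedOutput()
	return string(out), err
}

func main() {
	// The test retries this until the 9p filesystem becomes visible.
	if out, err := ssh("findmnt -T /mount-9p | grep 9p"); err != nil {
		fmt.Println("mount not visible yet:", out)
		return
	}

	out, _ := ssh("ls -la /mount-9p")
	fmt.Print(out)

	// Cleanup mirrors the test's teardown.
	ssh("sudo umount -f /mount-9p")
}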

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.29s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 service list -o json
functional_test.go:1490: Took "362.619955ms" to run "out/minikube-linux-amd64 -p functional-810151 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.36s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.176:31852
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.40s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.41s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.176:31852
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.40s)
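The ServiceCmd tests resolve a NodePort URL for the hello-node service (http://192.168.39.176:31852 in this run) in several formats. Below is a minimal Go sketch that resolves the URL the same way and issues a plain GET against it, assuming the functional-810151 profile and the hello-node service still exist and are reachable from the host.

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Same resolution the test performs with "service hello-node --url".
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-810151",
		"service", "hello-node", "--url").Output()
	if err != nil {
		panic(err)
	}
	url := strings.TrimSpace(string(out)) // e.g. http://192.168.39.176:31852 in this run

	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}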

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-810151 /tmp/TestFunctionalparallelMountCmdspecific-port2555994436/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-810151 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (249.436236ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
2024/07/29 17:48:42 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-810151 /tmp/TestFunctionalparallelMountCmdspecific-port2555994436/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-810151 ssh "sudo umount -f /mount-9p": exit status 1 (249.199899ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-810151 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-810151 /tmp/TestFunctionalparallelMountCmdspecific-port2555994436/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.03s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-810151 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1189309052/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-810151 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1189309052/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-810151 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1189309052/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-810151 ssh "findmnt -T" /mount1: exit status 1 (280.753928ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-810151 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-810151 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-810151 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1189309052/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-810151 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1189309052/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-810151 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1189309052/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.36s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-810151
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-810151
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-810151
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (206.02s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-794405 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0729 17:50:53.334855   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/client.crt: no such file or directory
E0729 17:51:21.019419   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-794405 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m25.38210337s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (206.02s)
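StartCluster brings up a multi-control-plane (HA) cluster with the --ha flag and then polls its status. A minimal Go sketch of the same two commands follows, with the flags copied verbatim from the invocation above; expect the start step to run for several minutes, as the 3m25s duration in this run suggests.

package main

import (
	"os"
	"os/exec"
)

// mk runs the minikube binary used in this report, streaming its output.
func mk(args ...string) {
	cmd := exec.Command("out/minikube-linux-amd64", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}

func main() {
	// Same flags as the test invocation above.
	mk("start", "-p", "ha-794405", "--wait=true", "--memory=2200", "--ha",
		"-v=7", "--alsologtostderr", "--driver=kvm2", "--container-runtime=crio")

	// Then check that all nodes report as expected.
	mk("-p", "ha-794405", "status", "-v=7", "--alsologtostderr")
}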

                                                
                                    
TestMultiControlPlane/serial/DeployApp (5.33s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-794405 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-794405 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-794405 -- rollout status deployment/busybox: (3.242240835s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-794405 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-794405 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-794405 -- exec busybox-fc5497c4f-8xr2r -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-794405 -- exec busybox-fc5497c4f-9t4xg -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-794405 -- exec busybox-fc5497c4f-kq6g2 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-794405 -- exec busybox-fc5497c4f-8xr2r -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-794405 -- exec busybox-fc5497c4f-9t4xg -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-794405 -- exec busybox-fc5497c4f-kq6g2 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-794405 -- exec busybox-fc5497c4f-8xr2r -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-794405 -- exec busybox-fc5497c4f-9t4xg -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-794405 -- exec busybox-fc5497c4f-kq6g2 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.33s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.17s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-794405 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-794405 -- exec busybox-fc5497c4f-8xr2r -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-794405 -- exec busybox-fc5497c4f-8xr2r -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-794405 -- exec busybox-fc5497c4f-9t4xg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-794405 -- exec busybox-fc5497c4f-9t4xg -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-794405 -- exec busybox-fc5497c4f-kq6g2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-794405 -- exec busybox-fc5497c4f-kq6g2 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.17s)
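PingHostFromPods resolves host.minikube.internal inside each busybox pod and pings the address it gets back. Here is a minimal Go sketch of that check for a single pod; the pod name is copied from this run and will differ on a fresh deployment, and plain kubectl with --context is used in place of the minikube kubectl wrapper the test calls.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// podExec runs a shell snippet inside the given pod via kubectl exec.
func podExec(pod, script string) string {
	out, err := exec.Command("kubectl", "--context", "ha-794405", "exec", pod,
		"--", "sh", "-c", script).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%v: %s", err, out))
	}
	return strings.TrimSpace(string(out))
}

func main() {
	pod := "busybox-fc5497c4f-8xr2r" // from this run; list pods to get a current name

	// Same awk/cut pipeline the test uses to pull the resolved address.
	hostIP := podExec(pod, "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
	fmt.Println("host.minikube.internal resolves to", hostIP)

	// Then confirm the host gateway answers a single ping from inside the pod.
	fmt.Println(podExec(pod, "ping -c 1 "+hostIP))
}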

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (54.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-794405 -v=7 --alsologtostderr
E0729 17:53:18.903040   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/functional-810151/client.crt: no such file or directory
E0729 17:53:18.908314   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/functional-810151/client.crt: no such file or directory
E0729 17:53:18.919257   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/functional-810151/client.crt: no such file or directory
E0729 17:53:18.939590   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/functional-810151/client.crt: no such file or directory
E0729 17:53:18.979907   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/functional-810151/client.crt: no such file or directory
E0729 17:53:19.060231   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/functional-810151/client.crt: no such file or directory
E0729 17:53:19.220657   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/functional-810151/client.crt: no such file or directory
E0729 17:53:19.540831   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/functional-810151/client.crt: no such file or directory
E0729 17:53:20.181522   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/functional-810151/client.crt: no such file or directory
E0729 17:53:21.462469   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/functional-810151/client.crt: no such file or directory
E0729 17:53:24.023674   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/functional-810151/client.crt: no such file or directory
E0729 17:53:29.144705   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/functional-810151/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-794405 -v=7 --alsologtostderr: (53.921745545s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (54.75s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-794405 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.53s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (12.49s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 cp testdata/cp-test.txt ha-794405:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 ssh -n ha-794405 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 cp ha-794405:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1800704997/001/cp-test_ha-794405.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 ssh -n ha-794405 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 cp ha-794405:/home/docker/cp-test.txt ha-794405-m02:/home/docker/cp-test_ha-794405_ha-794405-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 ssh -n ha-794405 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 ssh -n ha-794405-m02 "sudo cat /home/docker/cp-test_ha-794405_ha-794405-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 cp ha-794405:/home/docker/cp-test.txt ha-794405-m03:/home/docker/cp-test_ha-794405_ha-794405-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 ssh -n ha-794405 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 ssh -n ha-794405-m03 "sudo cat /home/docker/cp-test_ha-794405_ha-794405-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 cp ha-794405:/home/docker/cp-test.txt ha-794405-m04:/home/docker/cp-test_ha-794405_ha-794405-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 ssh -n ha-794405 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 ssh -n ha-794405-m04 "sudo cat /home/docker/cp-test_ha-794405_ha-794405-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 cp testdata/cp-test.txt ha-794405-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 ssh -n ha-794405-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 cp ha-794405-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1800704997/001/cp-test_ha-794405-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 ssh -n ha-794405-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 cp ha-794405-m02:/home/docker/cp-test.txt ha-794405:/home/docker/cp-test_ha-794405-m02_ha-794405.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 ssh -n ha-794405-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 ssh -n ha-794405 "sudo cat /home/docker/cp-test_ha-794405-m02_ha-794405.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 cp ha-794405-m02:/home/docker/cp-test.txt ha-794405-m03:/home/docker/cp-test_ha-794405-m02_ha-794405-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 ssh -n ha-794405-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 ssh -n ha-794405-m03 "sudo cat /home/docker/cp-test_ha-794405-m02_ha-794405-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 cp ha-794405-m02:/home/docker/cp-test.txt ha-794405-m04:/home/docker/cp-test_ha-794405-m02_ha-794405-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 ssh -n ha-794405-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 ssh -n ha-794405-m04 "sudo cat /home/docker/cp-test_ha-794405-m02_ha-794405-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 cp testdata/cp-test.txt ha-794405-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 ssh -n ha-794405-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 cp ha-794405-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1800704997/001/cp-test_ha-794405-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 ssh -n ha-794405-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 cp ha-794405-m03:/home/docker/cp-test.txt ha-794405:/home/docker/cp-test_ha-794405-m03_ha-794405.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 ssh -n ha-794405-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 ssh -n ha-794405 "sudo cat /home/docker/cp-test_ha-794405-m03_ha-794405.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 cp ha-794405-m03:/home/docker/cp-test.txt ha-794405-m02:/home/docker/cp-test_ha-794405-m03_ha-794405-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 ssh -n ha-794405-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 ssh -n ha-794405-m02 "sudo cat /home/docker/cp-test_ha-794405-m03_ha-794405-m02.txt"
E0729 17:53:39.385460   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/functional-810151/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 cp ha-794405-m03:/home/docker/cp-test.txt ha-794405-m04:/home/docker/cp-test_ha-794405-m03_ha-794405-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 ssh -n ha-794405-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 ssh -n ha-794405-m04 "sudo cat /home/docker/cp-test_ha-794405-m03_ha-794405-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 cp testdata/cp-test.txt ha-794405-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 ssh -n ha-794405-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 cp ha-794405-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1800704997/001/cp-test_ha-794405-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 ssh -n ha-794405-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 cp ha-794405-m04:/home/docker/cp-test.txt ha-794405:/home/docker/cp-test_ha-794405-m04_ha-794405.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 ssh -n ha-794405-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 ssh -n ha-794405 "sudo cat /home/docker/cp-test_ha-794405-m04_ha-794405.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 cp ha-794405-m04:/home/docker/cp-test.txt ha-794405-m02:/home/docker/cp-test_ha-794405-m04_ha-794405-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 ssh -n ha-794405-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 ssh -n ha-794405-m02 "sudo cat /home/docker/cp-test_ha-794405-m04_ha-794405-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 cp ha-794405-m04:/home/docker/cp-test.txt ha-794405-m03:/home/docker/cp-test_ha-794405-m04_ha-794405-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 ssh -n ha-794405-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 ssh -n ha-794405-m03 "sudo cat /home/docker/cp-test_ha-794405-m04_ha-794405-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.49s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.45s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.450683497s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.45s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.37s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (17.16s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-794405 node delete m03 -v=7 --alsologtostderr: (16.45726396s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.16s)
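Note: the go-template above walks each node's status.conditions and prints the status of the Ready condition, one line per node, so a fully healthy cluster prints only "True" lines. A minimal sketch of the same readiness check (context name assumed to match the profile):
	# print one Ready-condition status per node; expect every line to be "True"
	kubectl --context ha-794405 get nodes \
	  -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'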

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.37s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (323.16s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-794405 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0729 18:08:18.906702   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/functional-810151/client.crt: no such file or directory
E0729 18:09:41.948744   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/functional-810151/client.crt: no such file or directory
E0729 18:10:53.334544   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-794405 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m22.410612752s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (323.16s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.37s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (76.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-794405 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-794405 --control-plane -v=7 --alsologtostderr: (1m16.029489976s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-794405 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (76.84s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.54s)

                                                
                                    
TestJSONOutput/start/Command (96s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-743595 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0729 18:13:18.903712   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/functional-810151/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-743595 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m35.998729192s)
--- PASS: TestJSONOutput/start/Command (96.00s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.71s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-743595 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.71s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.61s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-743595 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.61s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.4s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-743595 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-743595 --output=json --user=testUser: (7.395079273s)
--- PASS: TestJSONOutput/stop/Command (7.40s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.19s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-061181 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-061181 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (61.756359ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"23f7995d-d044-48e4-9db9-b815649c18ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-061181] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ca4017da-42e0-4a88-8b6f-2078ddbee22d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19339"}}
	{"specversion":"1.0","id":"3caa95b6-3b64-4d8d-aed8-b0941f663353","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4574a2ae-0b11-4f92-b74f-28a5f6545acd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19339-88081/kubeconfig"}}
	{"specversion":"1.0","id":"6957318b-226a-4cca-a6f8-44344424e3fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19339-88081/.minikube"}}
	{"specversion":"1.0","id":"b4fa61f1-8dc8-4989-b407-459c93b57244","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"9523ae45-9117-4583-99b5-e136bc4df5de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"dd635cd2-b71c-4ab8-8139-a7b795fd724c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-061181" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-061181
--- PASS: TestErrorJSONOutput (0.19s)
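Note: with --output=json, minikube emits one CloudEvents-style JSON object per line (type io.k8s.sigs.minikube.step / .info / .error), which makes the stream straightforward to post-process. A hypothetical way to pull the failure reason out of a run like the one above (assumes jq is available; field names are taken from the events shown in the stdout block):
	out/minikube-linux-amd64 start -p json-output-error-061181 --memory=2200 --output=json --wait=true --driver=fail \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'
	# expected to print something like:
	#   DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/amd64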

                                                
                                    
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (85.58s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-293963 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-293963 --driver=kvm2  --container-runtime=crio: (42.982345785s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-297072 --driver=kvm2  --container-runtime=crio
E0729 18:15:53.334030   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-297072 --driver=kvm2  --container-runtime=crio: (40.36934013s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-293963
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-297072
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-297072" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-297072
helpers_test.go:175: Cleaning up "first-293963" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-293963
--- PASS: TestMinikubeProfile (85.58s)
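Note: `profile list -ojson` returns machine-readable profile data and `minikube profile <name>` switches the active profile. A small sketch for scripting against that output, assuming jq is available and that the JSON groups profiles under "valid"/"invalid" arrays with a Name field (as recent minikube releases do; field names may differ by version):
	# list the names of all valid profiles, then make one of them active
	out/minikube-linux-amd64 profile list -ojson | jq -r '.valid[].Name'
	out/minikube-linux-amd64 profile first-293963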

                                                
                                    
TestMountStart/serial/StartWithMountFirst (25.46s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-816909 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-816909 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (24.463406739s)
--- PASS: TestMountStart/serial/StartWithMountFirst (25.46s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-816909 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-816909 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)
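Note: the verification above is just an `ls` on the mount point plus a check that a 9p filesystem is mounted. A minimal manual equivalent; the exact mount source and options vary by host and driver, so the grep output in the comment is only indicative:
	# list the host directory exposed inside the guest
	out/minikube-linux-amd64 -p mount-start-1-816909 ssh -- ls /minikube-host
	# confirm it is backed by a 9p mount; expect a line roughly like
	#   192.168.39.1 on /minikube-host type 9p (rw,relatime,...,port=46464)
	out/minikube-linux-amd64 -p mount-start-1-816909 ssh -- mount | grep 9p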

                                                
                                    
TestMountStart/serial/StartWithMountSecond (25.26s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-835504 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-835504 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (24.259944218s)
--- PASS: TestMountStart/serial/StartWithMountSecond (25.26s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-835504 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-835504 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.36s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.55s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-816909 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.55s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-835504 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-835504 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                    
TestMountStart/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-835504
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-835504: (1.272666978s)
--- PASS: TestMountStart/serial/Stop (1.27s)

                                                
                                    
TestMountStart/serial/RestartStopped (22.15s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-835504
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-835504: (21.145496779s)
--- PASS: TestMountStart/serial/RestartStopped (22.15s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-835504 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-835504 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.36s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (120.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-976328 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0729 18:18:18.902817   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/functional-810151/client.crt: no such file or directory
E0729 18:18:56.381112   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-976328 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m0.076541456s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976328 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (120.47s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.56s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-976328 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-976328 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-976328 -- rollout status deployment/busybox: (2.391073077s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-976328 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-976328 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-976328 -- exec busybox-fc5497c4f-mdnj5 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-976328 -- exec busybox-fc5497c4f-vkccz -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-976328 -- exec busybox-fc5497c4f-mdnj5 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-976328 -- exec busybox-fc5497c4f-vkccz -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-976328 -- exec busybox-fc5497c4f-mdnj5 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-976328 -- exec busybox-fc5497c4f-vkccz -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.56s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-976328 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-976328 -- exec busybox-fc5497c4f-mdnj5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-976328 -- exec busybox-fc5497c4f-mdnj5 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-976328 -- exec busybox-fc5497c4f-vkccz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-976328 -- exec busybox-fc5497c4f-vkccz -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.77s)

                                                
                                    
TestMultiNode/serial/AddNode (48.92s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-976328 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-976328 -v 3 --alsologtostderr: (48.365193539s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976328 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (48.92s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-976328 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

                                                
                                    
TestMultiNode/serial/CopyFile (6.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976328 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976328 cp testdata/cp-test.txt multinode-976328:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976328 ssh -n multinode-976328 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976328 cp multinode-976328:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile737376291/001/cp-test_multinode-976328.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976328 ssh -n multinode-976328 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976328 cp multinode-976328:/home/docker/cp-test.txt multinode-976328-m02:/home/docker/cp-test_multinode-976328_multinode-976328-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976328 ssh -n multinode-976328 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976328 ssh -n multinode-976328-m02 "sudo cat /home/docker/cp-test_multinode-976328_multinode-976328-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976328 cp multinode-976328:/home/docker/cp-test.txt multinode-976328-m03:/home/docker/cp-test_multinode-976328_multinode-976328-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976328 ssh -n multinode-976328 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976328 ssh -n multinode-976328-m03 "sudo cat /home/docker/cp-test_multinode-976328_multinode-976328-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976328 cp testdata/cp-test.txt multinode-976328-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976328 ssh -n multinode-976328-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976328 cp multinode-976328-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile737376291/001/cp-test_multinode-976328-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976328 ssh -n multinode-976328-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976328 cp multinode-976328-m02:/home/docker/cp-test.txt multinode-976328:/home/docker/cp-test_multinode-976328-m02_multinode-976328.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976328 ssh -n multinode-976328-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976328 ssh -n multinode-976328 "sudo cat /home/docker/cp-test_multinode-976328-m02_multinode-976328.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976328 cp multinode-976328-m02:/home/docker/cp-test.txt multinode-976328-m03:/home/docker/cp-test_multinode-976328-m02_multinode-976328-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976328 ssh -n multinode-976328-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976328 ssh -n multinode-976328-m03 "sudo cat /home/docker/cp-test_multinode-976328-m02_multinode-976328-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976328 cp testdata/cp-test.txt multinode-976328-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976328 ssh -n multinode-976328-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976328 cp multinode-976328-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile737376291/001/cp-test_multinode-976328-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976328 ssh -n multinode-976328-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976328 cp multinode-976328-m03:/home/docker/cp-test.txt multinode-976328:/home/docker/cp-test_multinode-976328-m03_multinode-976328.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976328 ssh -n multinode-976328-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976328 ssh -n multinode-976328 "sudo cat /home/docker/cp-test_multinode-976328-m03_multinode-976328.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976328 cp multinode-976328-m03:/home/docker/cp-test.txt multinode-976328-m02:/home/docker/cp-test_multinode-976328-m03_multinode-976328-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976328 ssh -n multinode-976328-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976328 ssh -n multinode-976328-m02 "sudo cat /home/docker/cp-test_multinode-976328-m03_multinode-976328-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.96s)

                                                
                                    
TestMultiNode/serial/StopNode (2.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976328 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-976328 node stop m03: (1.336690434s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976328 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-976328 status: exit status 7 (404.747809ms)

                                                
                                                
-- stdout --
	multinode-976328
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-976328-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-976328-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976328 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-976328 status --alsologtostderr: exit status 7 (402.014893ms)

                                                
                                                
-- stdout --
	multinode-976328
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-976328-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-976328-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 18:20:18.606940  122954 out.go:291] Setting OutFile to fd 1 ...
	I0729 18:20:18.607182  122954 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:20:18.607191  122954 out.go:304] Setting ErrFile to fd 2...
	I0729 18:20:18.607195  122954 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:20:18.607399  122954 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19339-88081/.minikube/bin
	I0729 18:20:18.607600  122954 out.go:298] Setting JSON to false
	I0729 18:20:18.607628  122954 mustload.go:65] Loading cluster: multinode-976328
	I0729 18:20:18.607740  122954 notify.go:220] Checking for updates...
	I0729 18:20:18.608069  122954 config.go:182] Loaded profile config "multinode-976328": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:20:18.608085  122954 status.go:255] checking status of multinode-976328 ...
	I0729 18:20:18.608458  122954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:20:18.608532  122954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:20:18.628198  122954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43319
	I0729 18:20:18.628573  122954 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:20:18.629149  122954 main.go:141] libmachine: Using API Version  1
	I0729 18:20:18.629178  122954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:20:18.629570  122954 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:20:18.629776  122954 main.go:141] libmachine: (multinode-976328) Calling .GetState
	I0729 18:20:18.631348  122954 status.go:330] multinode-976328 host status = "Running" (err=<nil>)
	I0729 18:20:18.631364  122954 host.go:66] Checking if "multinode-976328" exists ...
	I0729 18:20:18.631618  122954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:20:18.631655  122954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:20:18.646646  122954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40409
	I0729 18:20:18.647036  122954 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:20:18.647518  122954 main.go:141] libmachine: Using API Version  1
	I0729 18:20:18.647539  122954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:20:18.647870  122954 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:20:18.648065  122954 main.go:141] libmachine: (multinode-976328) Calling .GetIP
	I0729 18:20:18.650765  122954 main.go:141] libmachine: (multinode-976328) DBG | domain multinode-976328 has defined MAC address 52:54:00:56:e1:42 in network mk-multinode-976328
	I0729 18:20:18.651416  122954 main.go:141] libmachine: (multinode-976328) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:e1:42", ip: ""} in network mk-multinode-976328: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:28 +0000 UTC Type:0 Mac:52:54:00:56:e1:42 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:multinode-976328 Clientid:01:52:54:00:56:e1:42}
	I0729 18:20:18.651438  122954 main.go:141] libmachine: (multinode-976328) DBG | domain multinode-976328 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:e1:42 in network mk-multinode-976328
	I0729 18:20:18.651526  122954 host.go:66] Checking if "multinode-976328" exists ...
	I0729 18:20:18.651835  122954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:20:18.651881  122954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:20:18.666361  122954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43317
	I0729 18:20:18.666729  122954 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:20:18.667148  122954 main.go:141] libmachine: Using API Version  1
	I0729 18:20:18.667167  122954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:20:18.667462  122954 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:20:18.667635  122954 main.go:141] libmachine: (multinode-976328) Calling .DriverName
	I0729 18:20:18.667855  122954 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 18:20:18.667883  122954 main.go:141] libmachine: (multinode-976328) Calling .GetSSHHostname
	I0729 18:20:18.670361  122954 main.go:141] libmachine: (multinode-976328) DBG | domain multinode-976328 has defined MAC address 52:54:00:56:e1:42 in network mk-multinode-976328
	I0729 18:20:18.670797  122954 main.go:141] libmachine: (multinode-976328) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:e1:42", ip: ""} in network mk-multinode-976328: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:28 +0000 UTC Type:0 Mac:52:54:00:56:e1:42 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:multinode-976328 Clientid:01:52:54:00:56:e1:42}
	I0729 18:20:18.670832  122954 main.go:141] libmachine: (multinode-976328) DBG | domain multinode-976328 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:e1:42 in network mk-multinode-976328
	I0729 18:20:18.670999  122954 main.go:141] libmachine: (multinode-976328) Calling .GetSSHPort
	I0729 18:20:18.671170  122954 main.go:141] libmachine: (multinode-976328) Calling .GetSSHKeyPath
	I0729 18:20:18.671335  122954 main.go:141] libmachine: (multinode-976328) Calling .GetSSHUsername
	I0729 18:20:18.671462  122954 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/multinode-976328/id_rsa Username:docker}
	I0729 18:20:18.748042  122954 ssh_runner.go:195] Run: systemctl --version
	I0729 18:20:18.753779  122954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:20:18.768085  122954 kubeconfig.go:125] found "multinode-976328" server: "https://192.168.39.211:8443"
	I0729 18:20:18.768116  122954 api_server.go:166] Checking apiserver status ...
	I0729 18:20:18.768152  122954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:20:18.781416  122954 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1140/cgroup
	W0729 18:20:18.791206  122954 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1140/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 18:20:18.791282  122954 ssh_runner.go:195] Run: ls
	I0729 18:20:18.795416  122954 api_server.go:253] Checking apiserver healthz at https://192.168.39.211:8443/healthz ...
	I0729 18:20:18.800026  122954 api_server.go:279] https://192.168.39.211:8443/healthz returned 200:
	ok
	I0729 18:20:18.800051  122954 status.go:422] multinode-976328 apiserver status = Running (err=<nil>)
	I0729 18:20:18.800065  122954 status.go:257] multinode-976328 status: &{Name:multinode-976328 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 18:20:18.800088  122954 status.go:255] checking status of multinode-976328-m02 ...
	I0729 18:20:18.800382  122954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:20:18.800416  122954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:20:18.815726  122954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33399
	I0729 18:20:18.816112  122954 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:20:18.816611  122954 main.go:141] libmachine: Using API Version  1
	I0729 18:20:18.816636  122954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:20:18.816959  122954 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:20:18.817176  122954 main.go:141] libmachine: (multinode-976328-m02) Calling .GetState
	I0729 18:20:18.818643  122954 status.go:330] multinode-976328-m02 host status = "Running" (err=<nil>)
	I0729 18:20:18.818669  122954 host.go:66] Checking if "multinode-976328-m02" exists ...
	I0729 18:20:18.818958  122954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:20:18.818992  122954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:20:18.834373  122954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45067
	I0729 18:20:18.834807  122954 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:20:18.835289  122954 main.go:141] libmachine: Using API Version  1
	I0729 18:20:18.835313  122954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:20:18.835674  122954 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:20:18.835896  122954 main.go:141] libmachine: (multinode-976328-m02) Calling .GetIP
	I0729 18:20:18.838767  122954 main.go:141] libmachine: (multinode-976328-m02) DBG | domain multinode-976328-m02 has defined MAC address 52:54:00:be:8b:fe in network mk-multinode-976328
	I0729 18:20:18.839258  122954 main.go:141] libmachine: (multinode-976328-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:8b:fe", ip: ""} in network mk-multinode-976328: {Iface:virbr1 ExpiryTime:2024-07-29 19:18:38 +0000 UTC Type:0 Mac:52:54:00:be:8b:fe Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:multinode-976328-m02 Clientid:01:52:54:00:be:8b:fe}
	I0729 18:20:18.839279  122954 main.go:141] libmachine: (multinode-976328-m02) DBG | domain multinode-976328-m02 has defined IP address 192.168.39.157 and MAC address 52:54:00:be:8b:fe in network mk-multinode-976328
	I0729 18:20:18.839409  122954 host.go:66] Checking if "multinode-976328-m02" exists ...
	I0729 18:20:18.839713  122954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:20:18.839757  122954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:20:18.854410  122954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33523
	I0729 18:20:18.854809  122954 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:20:18.855242  122954 main.go:141] libmachine: Using API Version  1
	I0729 18:20:18.855264  122954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:20:18.855515  122954 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:20:18.855676  122954 main.go:141] libmachine: (multinode-976328-m02) Calling .DriverName
	I0729 18:20:18.855816  122954 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 18:20:18.855833  122954 main.go:141] libmachine: (multinode-976328-m02) Calling .GetSSHHostname
	I0729 18:20:18.858219  122954 main.go:141] libmachine: (multinode-976328-m02) DBG | domain multinode-976328-m02 has defined MAC address 52:54:00:be:8b:fe in network mk-multinode-976328
	I0729 18:20:18.858537  122954 main.go:141] libmachine: (multinode-976328-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:8b:fe", ip: ""} in network mk-multinode-976328: {Iface:virbr1 ExpiryTime:2024-07-29 19:18:38 +0000 UTC Type:0 Mac:52:54:00:be:8b:fe Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:multinode-976328-m02 Clientid:01:52:54:00:be:8b:fe}
	I0729 18:20:18.858564  122954 main.go:141] libmachine: (multinode-976328-m02) DBG | domain multinode-976328-m02 has defined IP address 192.168.39.157 and MAC address 52:54:00:be:8b:fe in network mk-multinode-976328
	I0729 18:20:18.858745  122954 main.go:141] libmachine: (multinode-976328-m02) Calling .GetSSHPort
	I0729 18:20:18.858900  122954 main.go:141] libmachine: (multinode-976328-m02) Calling .GetSSHKeyPath
	I0729 18:20:18.859064  122954 main.go:141] libmachine: (multinode-976328-m02) Calling .GetSSHUsername
	I0729 18:20:18.859215  122954 sshutil.go:53] new ssh client: &{IP:192.168.39.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19339-88081/.minikube/machines/multinode-976328-m02/id_rsa Username:docker}
	I0729 18:20:18.936066  122954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:20:18.949564  122954 status.go:257] multinode-976328-m02 status: &{Name:multinode-976328-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0729 18:20:18.949598  122954 status.go:255] checking status of multinode-976328-m03 ...
	I0729 18:20:18.949914  122954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:20:18.949948  122954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:20:18.965072  122954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38029
	I0729 18:20:18.965498  122954 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:20:18.966009  122954 main.go:141] libmachine: Using API Version  1
	I0729 18:20:18.966037  122954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:20:18.966354  122954 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:20:18.966531  122954 main.go:141] libmachine: (multinode-976328-m03) Calling .GetState
	I0729 18:20:18.968114  122954 status.go:330] multinode-976328-m03 host status = "Stopped" (err=<nil>)
	I0729 18:20:18.968126  122954 status.go:343] host is not running, skipping remaining checks
	I0729 18:20:18.968131  122954 status.go:257] multinode-976328-m03 status: &{Name:multinode-976328-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.14s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (36.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976328 node start m03 -v=7 --alsologtostderr
E0729 18:20:53.334322   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/client.crt: no such file or directory
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-976328 node start m03 -v=7 --alsologtostderr: (36.141174152s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976328 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (36.74s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976328 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-976328 node delete m03: (1.781320985s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976328 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.29s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (184.72s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-976328 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0729 18:30:53.334111   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-976328 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m4.200469015s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-976328 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (184.72s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (44.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-976328
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-976328-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-976328-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (61.221727ms)

                                                
                                                
-- stdout --
	* [multinode-976328-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19339
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19339-88081/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19339-88081/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-976328-m02' is duplicated with machine name 'multinode-976328-m02' in profile 'multinode-976328'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-976328-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-976328-m03 --driver=kvm2  --container-runtime=crio: (43.917877407s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-976328
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-976328: exit status 80 (203.942844ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-976328 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-976328-m03 already exists in multinode-976328-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-976328-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (44.85s)

                                                
                                    
TestScheduledStopUnix (110.83s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-475735 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-475735 --memory=2048 --driver=kvm2  --container-runtime=crio: (39.303586428s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-475735 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-475735 -n scheduled-stop-475735
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-475735 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-475735 --cancel-scheduled
E0729 18:38:18.906081   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/functional-810151/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-475735 -n scheduled-stop-475735
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-475735
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-475735 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-475735
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-475735: exit status 7 (60.617909ms)

                                                
                                                
-- stdout --
	scheduled-stop-475735
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-475735 -n scheduled-stop-475735
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-475735 -n scheduled-stop-475735: exit status 7 (60.550832ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-475735" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-475735
--- PASS: TestScheduledStopUnix (110.83s)

                                                
                                    
TestRunningBinaryUpgrade (152.47s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3790064586 start -p running-upgrade-459882 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0729 18:40:53.334235   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3790064586 start -p running-upgrade-459882 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m15.05986771s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-459882 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-459882 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m15.794539531s)
helpers_test.go:175: Cleaning up "running-upgrade-459882" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-459882
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-459882: (1.07014465s)
--- PASS: TestRunningBinaryUpgrade (152.47s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-790573 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-790573 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (74.406564ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-790573] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19339
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19339-88081/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19339-88081/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (92.84s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-790573 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-790573 --driver=kvm2  --container-runtime=crio: (1m32.590907717s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-790573 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (92.84s)

                                                
                                    
TestNetworkPlugins/group/false (2.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-085245 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-085245 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (97.302348ms)

                                                
                                                
-- stdout --
	* [false-085245] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19339
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19339-88081/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19339-88081/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 18:39:13.249766  130487 out.go:291] Setting OutFile to fd 1 ...
	I0729 18:39:13.249900  130487 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:39:13.249910  130487 out.go:304] Setting ErrFile to fd 2...
	I0729 18:39:13.249914  130487 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:39:13.250079  130487 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19339-88081/.minikube/bin
	I0729 18:39:13.250704  130487 out.go:298] Setting JSON to false
	I0729 18:39:13.251652  130487 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":12073,"bootTime":1722266280,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 18:39:13.251707  130487 start.go:139] virtualization: kvm guest
	I0729 18:39:13.253701  130487 out.go:177] * [false-085245] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 18:39:13.254933  130487 notify.go:220] Checking for updates...
	I0729 18:39:13.254943  130487 out.go:177]   - MINIKUBE_LOCATION=19339
	I0729 18:39:13.256202  130487 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 18:39:13.257266  130487 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19339-88081/kubeconfig
	I0729 18:39:13.258547  130487 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19339-88081/.minikube
	I0729 18:39:13.259693  130487 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 18:39:13.260705  130487 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 18:39:13.262133  130487 config.go:182] Loaded profile config "NoKubernetes-790573": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:39:13.262241  130487 config.go:182] Loaded profile config "force-systemd-env-801126": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:39:13.262336  130487 config.go:182] Loaded profile config "offline-crio-778169": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:39:13.262420  130487 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 18:39:13.297765  130487 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 18:39:13.298838  130487 start.go:297] selected driver: kvm2
	I0729 18:39:13.298851  130487 start.go:901] validating driver "kvm2" against <nil>
	I0729 18:39:13.298865  130487 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 18:39:13.300757  130487 out.go:177] 
	W0729 18:39:13.301795  130487 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0729 18:39:13.302784  130487 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-085245 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-085245

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-085245

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-085245

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-085245

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-085245

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-085245

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-085245

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-085245

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-085245

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-085245

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085245"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085245"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085245"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-085245

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085245"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085245"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-085245" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-085245" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-085245" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-085245" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-085245" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-085245" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-085245" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-085245" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085245"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085245"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085245"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085245"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085245"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-085245" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-085245" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-085245" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085245"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085245"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085245"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085245"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085245"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-085245

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085245"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085245"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085245"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085245"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085245"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085245"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085245"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085245"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085245"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085245"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085245"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085245"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085245"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085245"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085245"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085245"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085245"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-085245"

                                                
                                                
----------------------- debugLogs end: false-085245 [took: 2.588963559s] --------------------------------
helpers_test.go:175: Cleaning up "false-085245" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-085245
--- PASS: TestNetworkPlugins/group/false (2.82s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.42s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.42s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (149.92s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.83547626 start -p stopped-upgrade-931829 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.83547626 start -p stopped-upgrade-931829 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m17.184815405s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.83547626 -p stopped-upgrade-931829 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.83547626 -p stopped-upgrade-931829 stop: (1.313606097s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-931829 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-931829 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m11.425098873s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (149.92s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (37.43s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-790573 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-790573 --no-kubernetes --driver=kvm2  --container-runtime=crio: (36.310125558s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-790573 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-790573 status -o json: exit status 2 (240.31034ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-790573","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-790573
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (37.43s)

                                                
                                    
TestNoKubernetes/serial/Start (47.17s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-790573 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-790573 --no-kubernetes --driver=kvm2  --container-runtime=crio: (47.167484805s)
--- PASS: TestNoKubernetes/serial/Start (47.17s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-790573 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-790573 "sudo systemctl is-active --quiet service kubelet": exit status 1 (200.554267ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (6.66s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (3.589371595s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (3.070891631s)
--- PASS: TestNoKubernetes/serial/ProfileList (6.66s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-790573
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-790573: (1.300506249s)
--- PASS: TestNoKubernetes/serial/Stop (1.30s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (31.36s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-790573 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-790573 --driver=kvm2  --container-runtime=crio: (31.363714566s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (31.36s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-790573 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-790573 "sudo systemctl is-active --quiet service kubelet": exit status 1 (217.449891ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

                                                
                                    
TestPause/serial/Start (61.03s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-134415 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-134415 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m1.02709339s)
--- PASS: TestPause/serial/Start (61.03s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.86s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-931829
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.86s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (61.96s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-085245 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-085245 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m1.96052591s)
--- PASS: TestNetworkPlugins/group/auto/Start (61.96s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (93.69s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-085245 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-085245 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m33.691030584s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (93.69s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-085245 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-085245 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-r4s8z" [8ac6c2c8-3c30-4b82-bcf8-959311c35526] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-r4s8z" [8ac6c2c8-3c30-4b82-bcf8-959311c35526] Running
E0729 18:45:53.334643   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004758891s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.22s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-085245 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-085245 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-085245 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (85.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-085245 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-085245 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m25.181896487s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (85.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (80.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-085245 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-085245 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m20.364343173s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (80.36s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-tx5bk" [1c128430-0ef5-40dc-9cf4-8bc1c44237d5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005310789s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-085245 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (14.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-085245 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-4x7ss" [f87f1e34-c10f-4f1d-a8c6-310c7144adff] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-4x7ss" [f87f1e34-c10f-4f1d-a8c6-310c7144adff] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 14.003615567s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (14.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-085245 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-085245 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-085245 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (80.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-085245 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-085245 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m20.76510882s)
--- PASS: TestNetworkPlugins/group/flannel/Start (80.77s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-085245 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (18.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-085245 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-nmw2q" [c8d4ba2d-5532-4787-8094-2666f249063d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-nmw2q" [c8d4ba2d-5532-4787-8094-2666f249063d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 18.005470872s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (18.24s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-085245 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-085245 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-jh68d" [b0b82abc-f748-4ead-b832-a0ac162c1a9a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-jh68d" [b0b82abc-f748-4ead-b832-a0ac162c1a9a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.005081756s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.24s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-085245 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-085245 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-085245 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-085245 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-085245 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-085245 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (99.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-085245 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-085245 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m39.052216042s)
--- PASS: TestNetworkPlugins/group/bridge/Start (99.05s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (108.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-085245 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
E0729 18:48:18.903668   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/functional-810151/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-085245 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m48.865961529s)
--- PASS: TestNetworkPlugins/group/calico/Start (108.87s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-cbzg6" [d05f70bf-1665-4629-b481-4c48c65d6886] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00507183s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-085245 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.43s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (12.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-085245 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context flannel-085245 replace --force -f testdata/netcat-deployment.yaml: (1.982279266s)
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-vdt2f" [4e7e846f-03da-4bf4-94b4-4a42bfc2b84a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-vdt2f" [4e7e846f-03da-4bf4-94b4-4a42bfc2b84a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004132149s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-085245 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-085245 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-085245 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-085245 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-085245 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-rq6vs" [da34e6bf-a894-4f81-a6a1-c0370a3c209b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-rq6vs" [da34e6bf-a894-4f81-a6a1-c0370a3c209b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004355728s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.23s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-085245 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-085245 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-085245 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-hblck" [6f7cf710-a652-4708-a685-08f4495d23ec] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005618482s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-085245 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (12.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-085245 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-dcbr5" [f8797ffb-b34b-4cb4-b196-a2ebb3693d60] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-dcbr5" [f8797ffb-b34b-4cb4-b196-a2ebb3693d60] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.005337479s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.30s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (72.65s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-524369 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-524369 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (1m12.653207625s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (72.65s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-085245 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-085245 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-085245 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)
E0729 19:19:53.657085   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/bridge-085245/client.crt: no such file or directory
E0729 19:20:04.528403   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/calico-085245/client.crt: no such file or directory

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (100.67s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-612270 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
E0729 18:50:47.657237   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/auto-085245/client.crt: no such file or directory
E0729 18:50:47.662551   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/auto-085245/client.crt: no such file or directory
E0729 18:50:47.672868   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/auto-085245/client.crt: no such file or directory
E0729 18:50:47.693226   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/auto-085245/client.crt: no such file or directory
E0729 18:50:47.733586   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/auto-085245/client.crt: no such file or directory
E0729 18:50:47.813947   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/auto-085245/client.crt: no such file or directory
E0729 18:50:47.974431   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/auto-085245/client.crt: no such file or directory
E0729 18:50:48.295036   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/auto-085245/client.crt: no such file or directory
E0729 18:50:48.935381   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/auto-085245/client.crt: no such file or directory
E0729 18:50:50.216348   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/auto-085245/client.crt: no such file or directory
E0729 18:50:52.777016   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/auto-085245/client.crt: no such file or directory
E0729 18:50:53.333737   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/addons-145541/client.crt: no such file or directory
E0729 18:50:57.898174   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/auto-085245/client.crt: no such file or directory
E0729 18:51:08.139185   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/auto-085245/client.crt: no such file or directory
E0729 18:51:28.620158   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/auto-085245/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-612270 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (1m40.667116152s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (100.67s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.3s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-524369 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2c1d938c-b045-4536-b678-2a79e9f4000f] Pending
helpers_test.go:344: "busybox" [2c1d938c-b045-4536-b678-2a79e9f4000f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2c1d938c-b045-4536-b678-2a79e9f4000f] Running
E0729 18:51:39.106748   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/kindnet-085245/client.crt: no such file or directory
E0729 18:51:39.112081   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/kindnet-085245/client.crt: no such file or directory
E0729 18:51:39.122369   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/kindnet-085245/client.crt: no such file or directory
E0729 18:51:39.142732   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/kindnet-085245/client.crt: no such file or directory
E0729 18:51:39.183353   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/kindnet-085245/client.crt: no such file or directory
E0729 18:51:39.264453   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/kindnet-085245/client.crt: no such file or directory
E0729 18:51:39.424889   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/kindnet-085245/client.crt: no such file or directory
E0729 18:51:39.745528   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/kindnet-085245/client.crt: no such file or directory
E0729 18:51:40.385712   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/kindnet-085245/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003830837s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-524369 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.30s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.99s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-524369 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0729 18:51:41.666776   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/kindnet-085245/client.crt: no such file or directory
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-524369 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.99s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-612270 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4aa2ebbe-4e60-4b4c-b41b-80be3918a624] Pending
E0729 18:52:20.069629   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/kindnet-085245/client.crt: no such file or directory
helpers_test.go:344: "busybox" [4aa2ebbe-4e60-4b4c-b41b-80be3918a624] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4aa2ebbe-4e60-4b4c-b41b-80be3918a624] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004108506s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-612270 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.29s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.98s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-612270 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-612270 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.98s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (654.73s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-524369 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
E0729 18:54:19.550641   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/flannel-085245/client.crt: no such file or directory
E0729 18:54:22.951120   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/kindnet-085245/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-524369 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (10m54.482296496s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-524369 -n no-preload-524369
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (654.73s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (568.93s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-612270 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
E0729 18:55:03.897405   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/bridge-085245/client.crt: no such file or directory
E0729 18:55:04.528194   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/calico-085245/client.crt: no such file or directory
E0729 18:55:04.533500   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/calico-085245/client.crt: no such file or directory
E0729 18:55:04.543749   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/calico-085245/client.crt: no such file or directory
E0729 18:55:04.564055   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/calico-085245/client.crt: no such file or directory
E0729 18:55:04.604446   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/calico-085245/client.crt: no such file or directory
E0729 18:55:04.684781   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/calico-085245/client.crt: no such file or directory
E0729 18:55:04.845252   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/calico-085245/client.crt: no such file or directory
E0729 18:55:05.165860   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/calico-085245/client.crt: no such file or directory
E0729 18:55:05.806740   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/calico-085245/client.crt: no such file or directory
E0729 18:55:07.087354   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/calico-085245/client.crt: no such file or directory
E0729 18:55:09.647935   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/calico-085245/client.crt: no such file or directory
E0729 18:55:14.138422   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/bridge-085245/client.crt: no such file or directory
E0729 18:55:14.768550   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/calico-085245/client.crt: no such file or directory
E0729 18:55:24.244632   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/custom-flannel-085245/client.crt: no such file or directory
E0729 18:55:25.008741   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/calico-085245/client.crt: no such file or directory
E0729 18:55:30.116742   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/enable-default-cni-085245/client.crt: no such file or directory
E0729 18:55:34.619598   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/bridge-085245/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-612270 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (9m28.678780518s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-612270 -n default-k8s-diff-port-612270
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (568.93s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-834964 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-834964 --alsologtostderr -v=3: (1.280163336s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.28s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-834964 -n old-k8s-version-834964
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-834964 -n old-k8s-version-834964: exit status 7 (64.272481ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-834964 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (49.63s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-453780 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
E0729 19:01:39.106620   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/kindnet-085245/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-453780 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (49.632651409s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (49.63s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.07s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-453780 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-453780 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.065008737s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.07s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (2.37s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-453780 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-453780 --alsologtostderr -v=3: (2.369589687s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.37s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-453780 -n newest-cni-453780
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-453780 -n newest-cni-453780: exit status 7 (64.099397ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-453780 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (35.3s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-453780 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
E0729 19:02:40.400537   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/custom-flannel-085245/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-453780 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (35.009493507s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-453780 -n newest-cni-453780
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (35.30s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-453780 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-f6ad1f6e
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.47s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-453780 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-453780 -n newest-cni-453780
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-453780 -n newest-cni-453780: exit status 2 (241.230161ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-453780 -n newest-cni-453780
E0729 19:02:46.273720   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/enable-default-cni-085245/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-453780 -n newest-cni-453780: exit status 2 (234.144532ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-453780 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-453780 -n newest-cni-453780
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-453780 -n newest-cni-453780
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.47s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (98.34s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-368536 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
E0729 19:03:18.903167   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/functional-810151/client.crt: no such file or directory
E0729 19:03:38.589706   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/flannel-085245/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-368536 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (1m38.339826203s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (98.34s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-368536 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d1e5c0b3-2fac-43b2-aa1e-4fa5af2d93c5] Pending
helpers_test.go:344: "busybox" [d1e5c0b3-2fac-43b2-aa1e-4fa5af2d93c5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d1e5c0b3-2fac-43b2-aa1e-4fa5af2d93c5] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003980518s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-368536 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.28s)
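The deploy step applies testdata/busybox.yaml, waits up to 8m0s for the pod to reach Running, then reads the container's open-file limit. A minimal Go sketch of that final check, assuming only the standard library; the kubectl invocation is copied from the log above, while the helper name and failure handling are illustrative:

    package example

    import (
        "os/exec"
        "strings"
        "testing"
    )

    // busyboxUlimit re-runs the kubectl exec shown above and returns the
    // container's open-file limit.
    func busyboxUlimit(t *testing.T) string {
        t.Helper()
        out, err := exec.Command("kubectl", "--context", "embed-certs-368536",
            "exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").CombinedOutput()
        if err != nil {
            t.Fatalf("ulimit check failed: %v\n%s", err, out)
        }
        return strings.TrimSpace(string(out))
    }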

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.98s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-368536 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-368536 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.98s)
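Here the addon is enabled with image and registry overrides (MetricsServer=registry.k8s.io/echoserver:1.4 pulled from fake.domain), and the test then describes the deployment. A hedged Go sketch of one way to verify the override landed; the report does not show the actual assertion in start_stop_delete_test.go:215, so matching the describe output against "fake.domain" is an assumption:

    package example

    import (
        "os/exec"
        "strings"
        "testing"
    )

    // metricsServerUsesFakeRegistry describes the metrics-server deployment
    // and checks that its image was rewritten to the fake.domain registry
    // passed via --registries. The string match is an assumption, not the
    // test's documented behaviour.
    func metricsServerUsesFakeRegistry(t *testing.T) {
        t.Helper()
        out, err := exec.Command("kubectl", "--context", "embed-certs-368536",
            "describe", "deploy/metrics-server", "-n", "kube-system").CombinedOutput()
        if err != nil {
            t.Fatalf("describe failed: %v\n%s", err, out)
        }
        if !strings.Contains(string(out), "fake.domain") {
            t.Errorf("expected an image from fake.domain, got:\n%s", out)
        }
    }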

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (625.33s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-368536 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
E0729 19:07:10.702088   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/auto-085245/client.crt: no such file or directory
E0729 19:07:40.400914   95282 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19339-88081/.minikube/profiles/custom-flannel-085245/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-368536 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (10m25.072596663s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-368536 -n embed-certs-368536
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (625.33s)

                                                
                                    

Test skip (40/320)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.30.3/cached-images 0
15 TestDownloadOnly/v1.30.3/binaries 0
16 TestDownloadOnly/v1.30.3/kubectl 0
23 TestDownloadOnly/v1.31.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.31.0-beta.0/binaries 0
25 TestDownloadOnly/v1.31.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0
38 TestAddons/serial/Volcano 0
47 TestAddons/parallel/Olm 0
57 TestDockerFlags 0
60 TestDockerEnvContainerd 0
62 TestHyperKitDriverInstallOrUpdate 0
63 TestHyperkitDriverSkipUpgrade 0
114 TestFunctional/parallel/DockerEnv 0
115 TestFunctional/parallel/PodmanEnv 0
132 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
133 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
134 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
135 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
136 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
137 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
138 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
139 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
163 TestGvisorAddon 0
185 TestImageBuild 0
212 TestKicCustomNetwork 0
213 TestKicExistingNetwork 0
214 TestKicCustomSubnet 0
215 TestKicStaticIP 0
247 TestChangeNoneUser 0
250 TestScheduledStopWindows 0
252 TestSkaffold 0
254 TestInsufficientStorage 0
258 TestMissingContainerUpgrade 0
263 TestNetworkPlugins/group/kubenet 2.85
272 TestNetworkPlugins/group/cilium 3.03
287 TestStartStop/group/disable-driver-mounts 0.15
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)
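The kubectl subtests are skipped on this linux runner because verifying the bundled kubectl only matters on darwin and windows. A minimal sketch of such a guard, assuming only the standard library; aaa_download_only_test.go:167 is not reproduced in this report, so the helper is illustrative:

    package example

    import (
        "runtime"
        "testing"
    )

    // skipUnlessDarwinOrWindows mirrors the "Test for darwin and windows"
    // skip: on linux there is nothing to verify, so the subtest skips.
    func skipUnlessDarwinOrWindows(t *testing.T) {
        t.Helper()
        if runtime.GOOS != "darwin" && runtime.GOOS != "windows" {
            t.Skip("Test for darwin and windows")
        }
    }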

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.3/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)
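All eight TunnelCmd subtests below skip with the same message: the runner cannot execute 'route' without a password prompt. A hedged sketch of such a pre-flight probe; the actual command behind functional_test_tunnel_test.go:90 is not shown in this report, and the use of a non-interactive sudo here is an assumption:

    package example

    import (
        "os/exec"
        "testing"
    )

    // requirePasswordlessRoute skips when 'route' cannot run without a
    // password. sudo -n fails instead of prompting; treating any error as
    // "password required" is an assumption for this sketch.
    func requirePasswordlessRoute(t *testing.T) {
        t.Helper()
        if err := exec.Command("sudo", "-n", "route").Run(); err != nil {
            t.Skipf("password required to execute 'route', skipping testTunnel: %v", err)
        }
    }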

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (2.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-085245 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-085245

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-085245

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-085245

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-085245

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-085245

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-085245

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-085245

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-085245

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-085245

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-085245

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085245"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085245"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085245"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-085245

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085245"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085245"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-085245" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-085245" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-085245" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-085245" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-085245" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-085245" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-085245" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-085245" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085245"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085245"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085245"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085245"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085245"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-085245" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-085245" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-085245" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085245"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085245"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085245"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085245"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085245"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-085245

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085245"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085245"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085245"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085245"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085245"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085245"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085245"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085245"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085245"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085245"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085245"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085245"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085245"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085245"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085245"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085245"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085245"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-085245"

                                                
                                                
----------------------- debugLogs end: kubenet-085245 [took: 2.705774316s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-085245" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-085245
--- SKIP: TestNetworkPlugins/group/kubenet (2.85s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-085245 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-085245

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-085245

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-085245

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-085245

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-085245

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-085245

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-085245

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-085245

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-085245

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-085245

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085245"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085245"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085245"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-085245

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085245"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085245"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-085245" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-085245" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-085245" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-085245" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-085245" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-085245" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-085245" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-085245" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085245"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085245"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085245"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085245"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085245"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-085245

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-085245

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-085245" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-085245" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-085245

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-085245

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-085245" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-085245" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-085245" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-085245" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-085245" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085245"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085245"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085245"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085245"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085245"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-085245

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085245"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085245"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085245"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085245"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085245"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085245"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085245"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085245"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085245"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085245"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085245"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085245"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085245"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085245"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085245"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085245"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085245"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-085245" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-085245"

                                                
                                                
----------------------- debugLogs end: cilium-085245 [took: 2.895293235s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-085245" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-085245
--- SKIP: TestNetworkPlugins/group/cilium (3.03s)
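Note on the debugLogs block above: every host/k8s probe reports the same hint because the "cilium-085245" profile was never created; the cilium variant was skipped on this runner, so there is no VM, kubeconfig context, or container runtime to inspect. To confirm that state by hand one would run "minikube profile list" (the profile is absent) and "kubectl config get-contexts cilium-085245" (the context is absent); these are the standard commands the log itself points to, mentioned here only as a quick local check.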

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-148539" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-148539
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)
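For context on how SKIP entries like the two above are produced: they come from a guard at the top of the test rather than from any failure. The sketch below is a minimal, hypothetical illustration of such a driver-gated skip using only Go's standard testing package; the package name, environment variable, and function name are assumptions for the example, not minikube's actual implementation (which lives in start_stop_delete_test.go, as the log line above shows).

package startstop_test // hypothetical package name, for illustration only

import (
	"os"
	"testing"
)

// TestDisableDriverMounts sketches the skip pattern recorded above: the test
// bails out early unless the configured driver is virtualbox, so runs on any
// other driver (such as the KVM/crio job in this report) are marked SKIP.
func TestDisableDriverMounts(t *testing.T) {
	// MINIKUBE_TEST_DRIVER is an assumed variable name for this sketch; the
	// real harness derives the driver from its own start arguments instead.
	if driver := os.Getenv("MINIKUBE_TEST_DRIVER"); driver != "virtualbox" {
		t.Skipf("skipping: disable-driver-mounts only runs on virtualbox (got %q)", driver)
	}
	// ... the real test would assert the mount flags here ...
}

In the report above, the cleanup in helpers_test.go still runs after the skip, which is why a delete of the "disable-driver-mounts-148539" profile is logged even though no cluster was ever started for it.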

                                                
                                    